Abstract - DB2 10 for z/OS - Where we are today, and where we are going. This session will take you through the latest with DB2 10: which functions customers are finding most valuable, the latest enhancements, and the current status of DB2 10 in the marketplace. We will also take you through the latest on DB2 11, including the status of the ESP, and touch on some industry trends that are influencing the enhancements we are planning for DB2 in the future.
1. DB2 10 for z/OS Migration Planning and Customer Experiences
Chris Crone
IBM DB2 for z/OS Development
Special Thanks to John Campbell for development of this material
3. Objectives
• Share lessons learned, surprises, pitfalls
• Provide hints and tips
• Address some myths
• Provide additional planning information
• Provide usage guidelines and positioning on new enhancements
4. Agenda
• Introduction
• Keys to customer migration success
• Performance and Scalability
• BREAK
• Migration Planning
• BIND, REBIND and EXPLAIN
• Availability
• Online Migration in a 24x7 environment
• Removal of DDF Private Protocol
• Steps to investigate CPU performance regression
• Other
• Summary
5. DB2 10 for z/OS Snapshot
• Fastest uptake
– +2X customers
– +3X licenses
– 25% coming from DB2 V8
• Customers in Production
– SAP, data warehouse and OLTP
workloads
– Skip-level and V2V
• Adoption Driven by:
– Price Performance
– Virtual Storage Constraint Relief
– SAP
– Analytics
– Platform Modernization
6. Keys to customer migration success
• Plan for a regular, scheduled program of preventative service which is implemented
– Need to stay more current on HIPERs at this stage in the release take up cycle
– Apply preventative service every 3 months
• Two “major” and two “minor” releases
• Refresh of the base every 6 months (“major”)
• Each base should be based on the latest quarterly RSU as opposed to use of PUT
• In addition, two 'minor' packages covering HIPERs and PEs in between times
– Augment by exploiting Enhanced HOLDDATA on at least a weekly basis before production cutover, and continue thereafter
• Identify and pull all applicable HIPERs and PE fixes
• Exploit Fix Category HOLDDATA (FIXCAT keywords); a sample SMP/E report follows this list
• Expedite the most critical PTFs into production
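As an illustration of the FIXCAT approach, here is a minimal SMP/E sketch; the target zone name (DB2TGT) is a placeholder, and the fix category shown is the one commonly documented for DB2 migrations, so verify it against the current IBM fix category list before use:

SET BOUNDARY(GLOBAL).
RECEIVE SYSMODS HOLDDATA.                /* pull service plus Enhanced HOLDDATA */
REPORT MISSINGFIX ZONES(DB2TGT)
       FIXCAT(IBM.Coexistence.DB2.SystemsMigration).

REPORT MISSINGFIX lists the PTFs flagged by the fix category that are not yet applied in the named zone, which is an easy way to identify and pull the applicable fixes.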
7. CST and RSU example
• RSU1012 (CST4Q10): all service through end Sept 2010 not already marked RSU, plus H&PE through end Nov 2010; available at the beginning of January 2011 (Base: Sep 2010, H&PE: Nov 2010)
• RSU1101: H&PE through end Dec 2010; available at the beginning of February 2011 (Base: Sep 2010, H&PE: Dec 2010)
• RSU1102: H&PE through end Jan 2011; available at the beginning of March 2011 (Base: Sep 2010, H&PE: Jan 2011)
• RSU1103 (CST1Q11): all service through end Dec 2010 not already marked RSU, plus H&PE through end Feb 2011; available at the beginning of April 2011 (Base: Dec 2010, H&PE: Feb 2011); a sample SMP/E APPLY follows below
H&PE = HIPER/Security/Integrity/Pervasive PTFs + PE resolution (and associated requisites and supersedes)
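To pick up a quarterly RSU level such as those above, one hedged SMP/E pattern is to APPLY by SOURCEID after the service has been received; the RSU1103 level and the zone name are placeholders taken from the example:

SET BOUNDARY(DB2TGT).
APPLY CHECK GROUPEXTEND SOURCEID(RSU1103).   /* trial run: surface holds and requisites */
APPLY GROUPEXTEND SOURCEID(RSU1103).         /* actual apply once the CHECK output is clean */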
8. Keys to customer migration success ...
• Build a realistic project plan
– Avoid a crash project
– Allow time at each stage for ‘soak testing’
– Allow contingency for some ‘bumps in the road’
– Involve applications teams early
• Investigation of incompatible changes and fix up
• Testing
• Performing application regression and stress testing is the best way to keep
‘fires’ away from production
– Use realistic production-like workload to certify readiness
• Perform systematic testing of release fallback toleration
• Monitor and control CPU, virtual and real storage consumption
9. Performance and Scalability
• Many opportunities for price/performance (cost reduction) improvements
– Major theme of this release
– Most welcome to our customers
• Question is how much?
– Some workloads not seeing improvements in CPU and elapsed time
– Conversely see big improvements for certain workloads
– Small workloads can skew expectations on savings
– Some measurements and quotes are insanely positive
• Should be ignored
– How to extrapolate and estimate for production mixed workload
• Estimation with accuracy and high confidence not practical
• Benchmarking effort would be required
• Very important to correctly level set performance expectations
• Do not spend any performance benefits until you see them
10. Performance and Scalability …
• Assumes no major access path regressions
• On Day 1 in production in CM without any changes (e.g., no rebind, no use of
1MB page size) there may be customers who see zero % improvement and
even some will see degradation
– Why? SPROCs disabled, puffing of run time structures for packages migrated from V8 or V9, etc.
• To maximise the performance improvements you must (see the sketch after this slide):
– REBIND static SQL packages
– Use PGFIX=YES bufferpools with sufficient 1MB real storage page frames to fully back the PGFIX=YES bufferpool requirement
• Seeing 0-10% improvement after REBIND and use of 1MB real storage frames
• Need to look at total CPU resource consumption picture across
– Acctg Class 2 TCB Time (Accounting Trace)
– DB2 System Address spaces (Statistics Trace)
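A minimal sketch of those two steps, assuming a placeholder collection (MYCOLL) and buffer pool (BP1); note that 1MB frames also require LFAREA to be configured in the z/OS IEASYSxx member, and a PGFIX change only takes effect when the pool is next allocated:

Rebuild the run time structures for all packages in a collection, keeping prior copies:
  REBIND PACKAGE (MYCOLL.*) PLANMGMT(EXTENDED) EXPLAIN(YES)
Page-fix a buffer pool and verify it (DB2 10 backs PGFIX=YES pools with 1MB frames when LFAREA is available):
  -ALTER BUFFERPOOL (BP1) PGFIX(YES)
  -DISPLAY BUFFERPOOL (BP1) DETAIL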
11. Performance and Scalability …
• The 0-10% CPU reduction is based on the DB2 portion of a given application
workload
• Customer value is driven by how sub-capacity workload licensing works
– Based on 4-hour rolling average MSU utilisation
– Highest rolling average figure for each month used to calculate software charges for
all MLC products (IBM and non-IBM)
– Provided DB2 forms a significant component of the total MSU usage during peak
period, any MSU savings will translate directly to MLC savings
– Typically this is the online day - mid morning and mid afternoon
– So for example - this may be driven by CICS-DB2 workload where the DB2 portion
of the workload only represents 40-60% of the total path length
– So the 0-10% may represent only 0 to 6% of the total (i.e., it needs to be discounted; see the worked example below)
– Investigate how much CPU is used in the 4-hour period for DB2 work (SQL)
– Evaluate V10 price bands under WLC pricing vs. V10 MSU savings
– Factor in the impact on overall z/OS software stack cost reduction: z/OS, CICS, MQ
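• Illustrative arithmetic (figures invented): if the peak 4-hour rolling average is 1,000 MSU and the peak is dominated by CICS-DB2 work where DB2 is 50% of the path length, a 10% DB2 CPU saving cuts the peak by roughly 1,000 x 0.5 x 0.10 = 50 MSU, i.e., a 5% reduction in the figure used for MLC charging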
12. Performance and Scalability …
• Sub-capacity pricing
Chart courtesy of Cristian Molaro, taken from White Paper: Getting the financial benefits of DB2 10 for z/OS
13. Performance and Scalability …
• Opportunities for additional price/performance improvements driven by DBM1
31-bit VSCR supported by additional real storage include
– More use of persistent threads with selective use of RELEASE(DEALLOCATE)
• High Performance DBATs
• CICS Protected ENTRY Threads
• CICS Unprotected ENTRY Threads with queuing
• Typical savings 0-10%, may be more
– Increasing MAXKEEPD to improve Local Dynamic Statement Cache hit ratio and
reduce the number of short prepares
– Sysplex/Data sharing Group consolidation
• So for example, 8-way to 4-way
• Reduced cost of data sharing
• Very important to correctly level set customer performance expectations
• Do not spend any performance benefits until they see them
14. Performance and Scalability …
• Customers should expect to see some increase in real storage resource
consumption with V10 (10-30%)
• How to make a gross level estimate for V10
– Make sure the V8/V9 system is warmed up as the memory needs to be allocated to
support normal operations
– Collect QW0225RL and QW0225AX from IFCID 225
– Subtract out the VPSIZEs for all bufferpools
– Add 30% to this remaining number
– Also add in extra space needed for increased defaults for sort and RID pools
– Must also factor in an increase in the MAXSPACE requirement for DB2 dumps
• Typical size needed to avoid partial dump (approx 16GB)
• Avoid very long dump capture times and bad system performance
• Critical for V10 serviceability
– Add back in the VPSIZEs for all bufferpools
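• A hypothetical worked example (figures invented): if IFCID 225 shows QW0225RL + QW0225AX = 14GB and the buffer pool VPSIZEs total 8GB, the non-buffer-pool footprint is 6GB; adding 30% gives 7.8GB, and adding the VPSIZEs back in gives ~15.8GB, before allowing for the increased sort/RID pool defaults and the larger MAXSPACE requirement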
15. Performance and Scalability …
Workload: Customer results
• CICS online transactions: approx. 7% CPU reduction in DB2 10 CM after REBIND; additional reduction when 1MB page frames are used for selective buffer pools
• CICS online transactions: approx. 10% CPU reduction from DB2 9
• CICS online transactions: approx. 5% CPU reduction from DB2 V8
• CICS online transactions: 10+% CPU increase
• Distributed concurrent insert: 50% DB2 elapsed time reduction, 15% chargeable CPU reduction after enabling High Performance DBATs
• Data sharing heavy concurrent insert: 38% CPU reduction
• Queries: average CPU reduction 28% from V8 to DB2 10 NFM
• Batch: overall 20-25% CPU reduction after REBIND of packages
16. Performance and Scalability …
Workload: Customer results
• Multi-row insert (data sharing): 33% CPU reduction from V9, 4x improvement from V8 due to LRSN spin reduction
• Parallel index update: 30-40% elapsed time improvement with class 2 CPU time reduction
• Inline LOB: SELECT of LOB shows 80% CPU reduction
• Include index: 17% CPU reduction in insert after using INCLUDE INDEX
• Hash access: 20-30% CPU reduction in random access; 16% CPU reduction comparing hash access with index-data access; 5% CPU reduction comparing hash with index-only access; further improvements delivered late in the beta program
17. Performance and Scalability …
• Measurements of IBM Relational Warehouse Workload (IRWW) with data sharing
– Base: DB2 9 NFM REBIND with PLANMGMT EXTENDED
– DB2 9 NFM to DB2 10 CM without REBIND showed 1.3% CPU reduction
– DB2 10 CM REBIND with the same access path showed 4.8% CPU reduction
– DB2 10 NFM brought 5.1% CPU reduction
– DB2 10 CM or NFM with RELEASE(DEALLOCATE) brought 12.6% CPU reduction from DB2 9
[Bar chart: % CPU reduction relative to DB2 9 for CM, CM with REBIND, NFM, and RELEASE(DEALLOCATE)]
18. Performance and Scalability …
• Query performance enhancements – no REBIND required for
– Index list prefetch
– INSERT index read I/O parallelism
– Workfile spanned records
– SQLPL performance
– High performance DBATs
– Inline LOBs (New function – requires NFM)
19. Performance and Scalability …
• Potential for access path regression when using OPTIMIZE FOR 1 ROW
– Used by customers as a hint to discourage use of sort or list prefetch
– Sometimes applied as an installation SQL coding standard
– V10 ‘hammer’ change
• Excludes the ‘sort’ access plan candidates
• Remaining ‘sort avoidance’ access plans compete on cost – lowest cost wins
• If no ‘sort avoidance’ access plans, then ‘sort’ access plans remain and compete on cost
– Evidence of access path regression when multiple candidate indexes available e.g.,
• DB2 using alternate index with lower MATCHCOLS value because there is no sort
– Solutions
• Change the application to code OPTIMIZE FOR 2 ROWS (see the sketch at the end of this slide)
• Alter an existing index or create a new index that would support both sort avoidance and
index matching (if predicates allow)
• Set new system parameter OPT1ROWBLOCKSORT to control behavior of OPTIMIZE FOR 1
ROW
– Introduced with APAR PM56845 - OPT1ROWBLOCKSORT=DISABLE (default)
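• A minimal sketch of the application-side fix (table, column and host variable names invented); OPTIMIZE FOR 2 ROWS still tells the optimizer that few rows will be fetched, but avoids the V10 ‘hammer’ that excludes the sort-based plan candidates:

  SELECT ORDER_NO, ORDER_DATE
    FROM ORDERS
    WHERE CUST_NO = :HV-CUSTNO
    ORDER BY ORDER_DATE DESC
    OPTIMIZE FOR 2 ROWS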
20. Performance and Scalability …
• Query performance enhancements - REBIND required to take advantage of
– Use of RELEASE(DEALLOCATE)
– Early evaluation of residual (stage 2) predicates
– IN-list improvements (new access method)
– SQL pagination (new access method)
– Query parallelism improvements
– Index include columns (New function – requires NFM)
– More aggressive view/table expression merge
– Predicate evaluation enhancements
– RID list overflow improvements
21. Performance and Scalability …
• Monitoring Virtual and Real Storage
– SPREADSHEETDD support in OMPE has not been enhanced to support V10
• The OMPE team is working on a ‘generic’ spreadsheet generator
• Outstanding requirement to also include serviceability fields
– MEMU2 and MEMUSAGE already enhanced for V10 and available on the DB2 for z/OS
Exchange community website on IBM My developerWorks
• From IBM My developerWorks My Home (sign in with your IBM login at
https://www.ibm.com/developerworks/mydeveloperworks/homepage), search 'memu2' in All My
developerWorks.
• From DB2 for z/OS Exchange (http://www.ibm.com/developerworks/software/exchange/db2zos), click
on ‘View and download examples’. The file is tagged with ‘memu2’.
• To access MEMU2 directly, use the links below (note that if you want to be kept informed of updates
and new versions, you need to log on to developerWorks rather than download the files anonymously)
– V8/V9
https://www.ibm.com/developerworks/mydeveloperworks/files/app/file/3af12254-4781-43f3-b4a8-3336e09c36df?lang=en
– V10
https://www.ibm.com/developerworks/mydeveloperworks/files/app/file/e2736ed5-0c73-4c59-b291-9da08255b941?lang=en
22. Performance and Scalability …
• Increase in DB2 system address space CPU resource consumption
– DBM1 SRB
• More use of prefetch
– Row level sequential detection and progressive prefetch
– INSERT index read I/O parallelism
– Index list prefetch when disorganised index
– After BIND, more use of list prefetch
• zIIP offload for prefetch and deferred write
• Seeing 50-70% zIIP offload achieved
– DBM1 TCB
• Closing of high use CLOSE=YES datasets when hitting DSMAX because of stale list
• See APAR PM56725 for this issue
– MSTR TCB
• Increase related to the real storage monitoring introduced by APAR PM24723
• DB2 calls a z/OS RSM service for the COUNTPAGES function, which serialised frame access with a
spin loop
• CPU increase due to RSM spin lock contention when multiple DB2 subsystems run on the same
LPAR
• See z/OS APAR OA37821 and corresponding DB2 APAR PM49816 for this issue
23. Performance and Scalability …
• DB2 10, z196, EC12 synergy
– Taking the general case, the performance improvement from V9 to V10 observed on a z10 processor
should be in the same range on a z196 processor, as long as they are measured on the same number
of processors
• Expectation is still in the 5-10% range
– Apart from MIPs improvement, z196 provides
• Higher cache hit ratio thus better scalability as number of processors per LPAR increases (more than 16
processors per LPAR)
– V10 performance on z196/EC12
• Scales better with more processors per LPAR than z10
• Can run with higher number of concurrent threads
– IBM measurement shows 20% ITR improvement from V9 to V10 (with a few benchmark specials)
on z196 80-way with IRWW-like workload
• Measurement is extreme case
• Will only apply to very high end customers
• Not a general message
– Why does V10 run better on z196
• Latch contention reductions, 1MB real storage page frame size, general path length
24. Performance and Scalability …
• Use of 1MB size real storage page frames on z10, z196 and EC12
– Useful commands
• DB2 -DISPLAY BUFFERPOOL(BP1) SERVICE=4
– Useful command to find out how many 1MB size page frames are being used
– Especially useful when running multiple DB2 subsystems on the same LPAR
– See DSNB999I message
• MVS DISPLAY VIRTSTOR,LFAREA
– Show total LFAREA, allocation split across 4KB and 1MB size frames, what is available
– See IAR019I message
25. Performance and Scalability …
• Use of 1MB size real storage page frames on z10, z196, and EC12
• Potential for reduced CPU through fewer TLB misses
– CPU reduction based on customer experience 0 to 6%
– Buffer pools must be defined as PGFIX=YES to use 1MB size page frames
– Must have sufficient total real storage to fully back the total DB2 requirement
– Involves partitioning real storage into 4KB and 1MB size page frames
• Specified by LFAREA xx% or n in IEASYSnn parmlib member and only changeable by IPL
• 1MB size page frames are non-pageable
• If 1MB size page frames are overcommitted, DB2 will use 4KB size page frames
• Recommendation to add 5-10% to the size to allow for some growth and tuning
– Must have both enough 4KB and enough 1MB size page frames
– Do not use 1MB size real storage frames until running smoothly on V10
– Make sure any critical z/OS maintenance is applied before using 1MB size real
storage page frames
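• An illustrative IEASYSxx parmlib entry (size invented); LFAREA=2G reserves 2GB of real storage for 1MB page frames, a percentage form such as LFAREA=10% is also accepted, and the value can only be changed by IPL:

  LFAREA=2G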
26. Performance and Scalability …
• Exceptions: CPU regression possible for very light OLTP transactions
– ‘Skinny’ packages with a few simple SQL statements
– Package allocation cost outweighs the benefit from SQL optimizations in V10
– APAR PM31614 may solve this by improving package allocation performance
– Good candidate for the use of persistent threads with RELEASE(DEALLOCATE) and
will help compensate
27. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
[Diagram: DBM1 address space virtual storage layout by release (V7, V8, V9, V10). Successive releases move structures above the 2GB bar (2^31) into 64-bit storage (16 exabytes = 2^64): IRLM locks and DDF control blocks, castout buffers, compression dictionaries, DBD cache, global dynamic statement cache, RID pool and sort pool, then the SK-CT/PT, and in V10 the CT/PT and thread storage, leaving only pointers below the bar. Practical limits grow from CTHREAD+MAXDBAT=2000 (in practice ~a few hundred threads) before V10 to CTHREAD+MAXDBAT=20000 (~a few thousand threads) in V10.]
28. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– Available in CM
– Requirement to REBIND static SQL packages to accrue maximum benefit
– Very good results achieved (up to 90% VSCR)
– Have high degree of confidence that problem addressed
• Real world proposition: 500 -> 2500-3000 threads plus
– Limiting factors now on vertical scalability (number of threads, thread storage
footprint)
• Amount of real storage provisioned on the LPAR
• ESQA/ECSA (31-bit) storage
• Log latch (LC19) contention
29. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– Major customer opportunities here for 31-bit VSCR and improved price/performance
• Potential to reduce legacy OLTP transaction CPU cost through use of
– More CICS protected ENTRY (persistent) threads
– More use of RELEASE(DEALLOCATE) with next/existing persistent threads
• Potential to reduce CPU for DRDA transactions by using High Performance DBAT
– Must be using CMTSTAT=INACTIVE so that threads can be pooled and reused
– Packages must be bound with RELEASE(DEALLOCATE) to get reuse for same connection
– MODIFY DDF PKGREL(BNDOPT) must also be in effect
– Do not overuse RELEASE(DEALLOCATE) on packages
• Will drive up the MAXDBAT requirement
• Potential to reduce CPU when using KEEPDYNAMIC(YES) e.g., SAP
– Increase MAXKEEPD to improve Local Dynamic Cache Hit Ratio and reduce the number of short
prepares
• Must provision additional real storage to back the requirement for each opportunity
30. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– More persistent threads with RELEASE(DEALLOCATE) is also trade off with
BIND/REBIND and DDL concurrency
– For RELEASE(DEALLOCATE) some locks are held beyond commit until thread
termination
• Mass delete locks (SQL DELETE without WHERE clause)
• Gross level lock acquired on behalf of a SQL LOCK TABLE
• Note: no longer a problem for gross level lock acquired by lock escalation
– CICS-DB2 accounting for cost of thread create and terminate, or avoidance thereof
• CICS uses the L8 TCB to access DB2 irrespective of whether the application is thread safe
or not
• Thread create and terminate cost will clock against the L8 TCB and will be in the CICS SMF
Type 110 record
• Note: prior to OTE did not capture the thread create in the SMF Type 110
31. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– High Performance DBATs (Hi-Perf DBATs) is a new type of distributed thread
• Must be using CMTSTAT=INACTIVE so that threads can be pooled and reused
• Packages must be bound with RELEASE(DEALLOCATE) to get reuse for same connection and -
MODIFY DDF PKGREL(BNDOPT) must also be in effect
• When a DBAT can be pooled after end of client's UOW
– Now DBAT and client connection will remain active together
• Still cut an accounting record and end the enclave
– After the Hi-Perf DBAT has been reused 200 times
• DBAT will be purged and client connection will then go inactive
– All the interactions with the client will still be the same in that if the client is part of a sysplex workload balancing
setup, it will still receive indications that the connection can be multiplexed amongst many client connections
– IDTHTOIN will not apply if the Hi-Perf DBAT is waiting for the next client UOW
– If Hi-Perf DBAT has not received new work for POOLINAC time
• DBAT will be purged and the connection will go inactive
– If # of Hi-Perf DBATs exceed 50% of MAXDBAT threshold
• DBATs will be pooled at commit and package resources copied/allocated as RELEASE(COMMIT)
– Hi-Perf DBATs can be purged to allow DDL, BIND, and utilities to break in
• Via -MODIFY DDF PKGREL(COMMIT)
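• For reference, the two DDF commands involved, shown with the ‘-’ command prefix:

  -MODIFY DDF PKGREL(BNDOPT)   honour the package RELEASE bind option (enables Hi-Perf DBATs)
  -MODIFY DDF PKGREL(COMMIT)   force RELEASE(COMMIT) behavior so DDL, BIND and utilities can break in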
32. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– High Performance DBATs (Hi-Perf DBATs) should be used carefully and selectively
• Want to have some high performance applications running on LUW application servers
connected to DB2 10 for z/OS running with High Performance DBATs and others not
• Standard ODBC and JDBC packages supplied with drivers/connect packages should be
bound twice into two different package collections e.g.,
– The CS package in collection 1 (e.g., NULLID) would be bound with RELEASE(COMMIT) and would
not use high performance DBATs
– The CS package in collection 2 (e.g., NULLID2) will be bound with RELEASE(DEALLOCATE) so
that the applications using that package will be eligible to use high performance DBATs
– For JDBC applications
• Set the currentPackageSet property in the respective datasource
– For .NET and ODBC / CLI applications
• Set CurrentPackageSet parameter in the db2dsdriver.cfg configuration
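• A hedged sketch of populating the second collection via BIND COPY (the NULLID2 collection follows the example above; SYSSH200 is one of the standard CLI packages, used here purely as an illustration):

  BIND PACKAGE(NULLID2) COPY(NULLID.SYSSH200) -
       ACTION(REPLACE) RELEASE(DEALLOCATE)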
33. Performance and Scalability …
• DBM1 31-bit Virtual Storage Constraint Relief with 64-bit SQL run time
– Potential to reduce the number of DB2 subsystems in data sharing group
• First step is to collapse multiple DB2 members running on the same LPAR
• May then be able to reduce the number of LPARs/DB2 members
• Consider the increase in logging rate per DB2 member
– Possible aggravation of LC19 contention despite V10 improvement
• Consider the increase in SMF data volume per LPAR
– Can enable DB2 compression of SMF data to reduce SMF data volume
• Experience is that Accounting records compress ~70%
• Tiny CPU overhead at ~1%
– Re-consider use of accounting roll up for DDF and RRSAF workload (default)
• Compromises performance problem determination/problem source identification (PD/PSI), as information on outlying transactions is lost
• Significant enhancements to package level accounting so it is now useful
• Consider the increased DUMPSRV and MAXSPACE requirement
– Re-emphasise the continued value of data sharing to differentiate the platform
• Support avoidance of planned outages
• Avoid humongous single points of failure
• Recommended minimum of 4-way as very best solution for true continuous availability
34. Performance and Scalability …
• 31-bit and 64-bit virtual storage contraction
– CONTSTOR=YES and MINSTOR=YES
• These existing system parameters drive the contraction of 31-bit storage pools and the best
fit allocation of 31-bit storage respectively
• Not applicable to 64-bit storage
• Not as critical as before V10
• Assuming the generous DBM1 31-bit VSCR in V10 is achieved, set CONTSTOR=NO and MINSTOR=NO
– 64-bit thread pools are contracted under control of
• Commit count
• New Real Storage Management DISCARD function (see follow on slides)
35. Performance and Scalability …
• Real storage
– Need to carefully plan, provision and monitor real storage consumption
– Prior to V10 a hidden zparm SPRMRSMX (‘real storage kill switch’) existed
• SPRMRSMX prevents a runaway DB2 subsystem from taking the LPAR down
– Should be used when there is more than one DB2 subsystem running on the same LPAR
– Aim is to prevent multiple outages being caused by a single DB2 subsystem outage
– Should be set to ~1.2 to 2x normal DB2 subsystem usage, depending on the contribution from the total bufferpool storage
requirement
– DB2 subsystem usage is DBM1 ASID usage (as reported in IFCID 225) plus MSTR ASID usage plus DIST ASID
usage
– Kills the DB2 subsystem when SPRMRSMX value reached
• With V10, will need to factor in 64-bit shared and common use to establish new footprint
– Problems with introduction of V10
• Unable to monitor the REAL and AUX storage frames used for 64-bit shared storage
– V9 not really an issue, as limited use of 64-bit shared
– But now V10 makes extensive use of 64-bit shared
• LPAR level instrumentation buckets for REAL and AUX storage use
– If more than one DB2 subsystem runs on the same LPAR, then the numbers reported are inaccurate
– Reliable numbers are only possible if DB2 is the only subsystem on the LPAR using 64-bit shared
• Lack of ENF 55 condition monitoring
– 50% of AUX used
36. Performance and Scalability …
• Real storage …
– DB2 APAR PM24723 is very important
• Monitoring issue is addressed and new extensions to IFCID 225 provided
– Pre-req is new MVS APAR OA35885 which provides a new callable service to RSM to provide REAL and AUX used
for addressing range for shared objects
• SPRMRSMX hidden zparm now becomes an opaque parameter REALSTORAGE_MAX
• Introduces DISCARD mode to contract storage usage to protect against excessive paging and
use of AUX
• New zparm REALSTORAGE_MANAGEMENT controls when DB2 frees storage frames back to
z/OS
– ON -> Discard unused frames all the time - discard stack, thread storage, keep footprint small
– OFF -> Do not discard unused frames unless things are getting out of hand
– AUTO (default) -> Detect whether paging is imminent and reduce the frame counts to avoid system paging
• DB2 monitors paging rates, switches between ON/OFF and decides when to start discard of
unused frames based on
– 80% of REALSTORAGE_MAX reached
– 50% of AUX (ENF55 condition) used
– Hitting AVQLOW (available real storage frame) when REALSTORAGE_MANAGEMENT=AUTO
• New messages (DSNV516I, 517I) for when paging rate thresholds cause DB2 to free real frames
• Strong recommendation to apply PTF for APAR PM24723 before going into business production
and to run with REALSTORAGE_MANAGEMENT=AUTO
37. Performance and Scalability …
• High INSERT performance …
– Reduced LRSN spin for inserts to the same page
• Works well for MRI and INSERT within loop in a data sharing environment
– Optimization for ‘pocket’ sequential insert works well
• Index manager picks the candidate RID during sequential insert (next lowest key rid)
• Higher chance to find the space and avoiding a space search
– Parallel index IO works very well when activated for random key inserts
• >= 3 indexes
• Prefetch and deferred write offload to zIIP to compensate
• Compress on INSERT
– Compression ratios almost as good as those achieved by running REORG later
• Active log writes
– Prior to V10, log writes are done serially when re-writing partially filled CIs
– Determined that destructive writes due to IO errors no longer occur
– Now all log write IOs are done in parallel
– Elapsed time improvements
38. Performance and Scalability …
• Accounting Trace Class 3 enhancement – separate counters
– IRLM Lock/Latch waits
– DB2 Latch waits
• Data sharing
– Faster DB2 shut down by avoiding local buffer pool scan per GBP-dependent object
– Avoiding scan of XXXL local buffer pool when
• Pageset/partition transitions into GBP-dependency
• Pageset/partition transitions out of GBP-dependency
• Inline LOBs work very well if you hit the sweet spot
– Potential for significant CPU and elapsed time improvement with the right inline value
– Trade off in setting the right inline value
• Avoiding access to auxiliary tablespace
• Increasing base row size with fewer rows per page
• May have to increase the page size
– Inline portion can be compressed
– Significant space savings with small LOBs (<1000 bytes)
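• An illustrative DDL sketch (table, column, database and table space names invented; requires NFM and a UTS); the INLINE LENGTH clause keeps the first n bytes of each LOB in the base row, so small LOBs avoid the auxiliary table space entirely:

  CREATE TABLE ORDER_NOTES
    (ORDER_NO INTEGER  NOT NULL,
     NOTE     CLOB(1M) INLINE LENGTH 1000)  -- notes of <=1000 bytes stay in the base row
    IN DB1.TS1;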
39. Performance and Scalability …
• Hash access vs. Index only access and index lookaside
– Competes against index only access and index lookaside
• Advantage that index only access still provides for clustered data access
• Can now have unique index with INCLUDE columns
– Reduce number of indexes required for performance reasons
– Improve insert, update and delete performance
– Need to find the sweet spot
• High NLEVELS in index (>=3)
• Purely direct row access by primary key
• Truly random access
• Read intensive, not volatile
• No range queries
• Many rows per page etc
– Space allocation of fixed hash space is key to control overflow
• Too small will lead to rows in overflow
• Too large will lead to random IO
• REORG AUTOESTSPACE(YES) but still some rows in overflow
– Degraded LOAD and REORG utility performance
ftp://public.dhe.ibm.com/software/data/sw-library/db2/zos/value_DB2_hash_access_wp.pdf
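• Illustrative sketches of the two options above (all names and sizes invented; both require NFM, and hash requires a UTS):

  -- unique index with INCLUDE columns, replacing a separate two-column index
  CREATE UNIQUE INDEX IX_ACCT
    ON ACCOUNTS (ACCT_NO)
    INCLUDE (BALANCE);

  -- hash organization with a fixed hash space sized to control overflow
  ALTER TABLE ACCOUNTS
    ADD ORGANIZE BY HASH UNIQUE (ACCT_NO)
    HASH SPACE 4 G;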
40. Performance and Scalability …
• Improved index space search when an index leaf page splits
– In V8/V9, Index Manager
• First searches the space map page covering the splitting page
• If there is no free entry, searches the space map pages starting from the first space map page to the
highest allocated page
• If all the space map pages are full, has to extend
• If the index is huge and all the space map pages having free entries are toward the end of the index, this
process can take a very long time
– In V10, Index Manager
• After searching the space map page covering the splitting page, and finding it full, will start searching
from the page number it last remembered as having a free entry (page A)
• When it reaches the highest allocated page, it wraps to the beginning and searches forward until it
reaches page A
• Only then does it extend, since the entire index is full
• The page number of the space map page having free entry is stored in an in-memory control block
– When Index Manager finds a space map page with free entry, it is updated to be the page number of that space map
page
– Index Manager updates this value when an index page is deleted or when the index is mass deleted
• Retrofitted back to V9 via APAR PM15474
41. DB2 10 for z/OS Migration Planning
and Customer Experiences (Break)
43. Migration and Planning
• Migration process very similar to V8 and V9
– Works well, with few problems seen in migration and fallback
• Migration from either DB2 for z/OS V8 NFM or DB2 9 for z/OS NFM
• These migration fallback sequences are not valid
– V8 NFM > V10 CM8 > V8 NFM > V9 CM
– V8 NFM > V9 CM > V8 NFM > V10 CM8
• Fallback Toleration SPE
– APAR PK56922
• Early Code
– For V8/V9 APAR PK87280 (supersedes APAR PK61766)
• Information APARs
– II14474: V8 to V10
– II14477: V9 to V10
44. Migration and Planning …
• Use of V10 Early Code with V8
– Installing the V10 Early Code for the first time requires an IPL
– V8 Early Code does not understand –REFRESH
– However, subsequent maintenance to the V10 Early Code can be accomplished with
a -REFRESH command
• If coming from V8
– BSDS must be reformatted for larger active / archive tracking
• IPL amounts need to be adjusted based on the number of DB2 members
– 64-bit Private (1TB)
– 64-bit Shared (128GB)
– 64-bit Common (6GB)
45. Migration and Planning …
• DB2 Connect
– Minimum level
• V9.5 FP7
• V9.7 FP3A, required for new functions
– Start with the latest levels based on CST/RSU and stabilise
46. Migration and Planning …
• DBRMs bound directly into plans no longer supported
– If found in V10, will trigger auto bind into packages
– For V8 and V9
• APARs PK62876/PK79925 add new syntax to convert from DBRMs to packages
– REBIND PLAN option COLLID (see the sketch at the end of this slide)
– Could result in access path change
• APARs PM01821 (Version) and PM30382 (Location from * to blank) should be on
– Best to migrate DBRMs to packages before migrating to V10
– Old plans and packages bound prior to V6 will require REBIND
– Catalog and Directory must be SMS managed (EF, EA) ahead of CM
– PDSEs required for SDSNLOAD, SDSNLOD2, ADSNLOAD
– DSNHDECP NEWFUN=V10|V9|V8
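• An illustrative DBRM-to-package conversion (plan name invented); COLLID(*) binds the plan’s DBRMs into packages in a default collection (DSN_DEFAULT_COLLID_PLANA), and an explicit collection name can be given instead:

  REBIND PLAN(PLANA) COLLID(*)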
47. BIND, REBIND and EXPLAIN
• Value of REBIND under V10
– Improved performance from new run time (avoid puffing, enable SPROC)
– Maximize DBM1 31-bit VSCR
– Allow RID overflow to workfile
– Take advantage of query optimization changes (available in CM mode)
– Reduce exposure to problems with migrated packages from earlier releases
• INCORROUTs
• Thread abends
• Can mitigate exposure to bad access path change introduced with REBIND
which leads to degraded run time performance (regression)
– Use access plan stability (PLANMGMT=EXTENDED|BASIC) and fallback if needed
• PLANMGMT=EXTENDED is now the default
– Use APREUSE and APCOMPARE
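• A hedged sketch of rebinding with access plan stability and falling back if needed (collection and package names invented):

  REBIND PACKAGE(COLL1.PKG1) PLANMGMT(EXTENDED) APCOMPARE(WARN)   keep old copy, warn on access path change
  REBIND PACKAGE(COLL1.PKG1) SWITCH(PREVIOUS)                     fall back if the new access path regresses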
48. BIND, REBIND and EXPLAIN …
• RUNSTATS/REBIND recommendations based upon scenario
– V8 preparation
• If RUNSTATS will be difficult on large number of objects immediately after migration to V9/10, then
REORG and/or RUNSTATS (V8) immediately prior to migration can reduce RUNSTATS need on V9/10 -
as RUNSTATS INDEX under V10 can be sufficient to capture new CR/DRF
– V9 migration
• RUNSTATS objects as soon as possible after migration
• Target dynamic applications first as these are exposed to new access paths immediately
• Delay static REBINDs until associated objects have RUNSTATS run
– V8->V10 migration
• RUNSTATS objects as soon as possible after migration
• Target dynamic applications first as these are exposed to new access paths immediately
• Equal priority - target static parallelism packages to REBIND to avoid incremental bind at each execution
• Delay non-parallelism REBINDs until associated objects have RUNSTATS run
– V9->V10 migration
• REBIND static parallelism packages as soon as possible to avoid incremental bind at each execution
• Delay non-parallelism REBINDs until associated objects have RUNSTATS run
• BIND/REBIND options APREUSE/APCOMPARE are available on V10 for packages bound on V9
49. BIND, REBIND and EXPLAIN …
• RUNSTATS/REBIND recommendations based upon scenario …
– V9/10 co-existence
• Set ABIND=COEXIST while in co-existence with V8
• What to do with static parallel queries?
– Accept incremental bind whenever executed on V10 member
– OR, REBIND with DEGREE('1') to disable parallelism while in co-existence
• Follow the V8->V10 migration steps after all members are on V10, including resetting the
following zparm
• Set ABIND=YES
50. BIND, REBIND and EXPLAIN …
• Single thread BIND/REBIND performance
– Degraded CPU and elapsed time performance on entry to CM
• PLANMGMT=EXTENDED is now default
• New indexes defined for post ENFM when hash links are eliminated
• Change in access path (index access) on entry to CM
• No concurrency improvement until after Catalog restructure in ENFM
• Concurrent BIND/REBIND performance
– Problems addressed
• Performance problems related to DELETE/INSERT process
• Space growth in SPT01 for both LOB space and base table
– Now working well
• Inefficient space search for out of line LOB in data sharing (APAR PM24721)
• Inline LOB with compression for SPT01 to address SPT01 growth (APAR PM27073)
• More efficient space reuse for LOB tablespace (APAR PM64226)
– Recommendations
• Customers need to change existing procedures to go parallel
• But cannot do this until post ENFM
• Benefit from reducing application down time to implement new application releases
51. BIND, REBIND and EXPLAIN …
• EXPLAIN tables
– Format and CCSID from previous releases are deprecated in V10
• Cannot use pre V8 format
– SQLCODE -20008
• V8 or V9 format
– Warning SQLCODE +20520 regardless of CCSID EBCDIC or UNICODE
• Must not use CCSID EBCDIC with V10 format
– EXPLAIN fails with RC=8 DSNT408I SQLCODE = -878
– BIND with EXPLAIN fails with RC=8 DSNX200I
– Recommendations
• Use CCSID UNICODE in all supported releases (V8, V9, V10) due to problems with character truncation and
conversion etc
• Use the V10 extended column format with CCSID UNICODE when
– Applications access EXPLAIN tables and can only tolerate SQLCODE 0 or +100
• V10 column format is supported under V8 and V9 with the SPE fallback APAR PK85956 applied with the
exception of
– DSN_STATEMENT_CACHE_TABLE due to the BIGINT columns
– APAR PK85068 can help migrate V8 or V9 format to the new V10 format with CCSID
UNICODE
52. Availability
• Online Schema Evolution (‘Deferred Alter’)
– Migrate from classic table space types (simple, segmented, partitioned) to UTS
PBG/PBR
• One way ticket only
– UTS is pre-requisite for Cloned Table, Hash, Inline LOB, Currently Committed
– Once migrated to UTS PBG/PBR can change attributes in both directions
• DSSIZE, index page size, MEMBER CLUSTER, Hash Access, …
– Benefits
• Streamlined way to move to UTS
• Reduce administrative time and cost
• Cuts down on errors
• Reduce outages
– Issue: PIT recovery to a point before the successful materializing REORG is not possible
• Incorrect results from REORG
• Application change rollback
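• An illustrative deferred alter (database and table space names invented); the ALTER becomes a pending change, and a subsequent REORG materializes the conversion to UTS PBG:

  ALTER TABLESPACE DBA1.TSORDER MAXPARTITIONS 16;
  -- then: REORG TABLESPACE DBA1.TSORDER SHRLEVEL CHANGE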
53. Availability …
• Restart Light enhancement
– LBACKOUT will now be honoured
• LBACKOUT=YES|AUTO will cause postponed abort (PA) URs to be created
• Restart will complete
• DB2 will shut down
• Retained locks will be kept on behalf of PA URs
– Controlled via new system parameter
– Also retrofitted back to V9 via APAR
• Online REORG with FORCE
– Only running threads which are blocking are cancelled
– Threads which are suspended / inactive will still cause REORG to fail
• Online REORG LOB with DISCARD
– Cannot handle LOB columns greater than 32KB
54. Online Migration in 24*7 environment
• Technically possible to run DSNTIJTC and DSNTIJEN alongside well behaved
online workloads
– Jobs use SQL DDL with frequent commit and REORG SHRLEVEL(REFERENCE)
– Designed to fail gracefully leaving DB2 catalog fully operational
– After problem determination is complete, the respective job can be corrected and
resubmitted
– The respective job will restart from where it left off
55. Online Migration in 24*7 environment …
• But there are some ‘rules of the game’, and you must be prepared to play
– DSNTIJTC and DSNTIJEN jobs should be scheduled during a relatively quiet period
– If non data sharing
• Must stop all application workload when DSNTIJTC job is running
– If data sharing
• Must route work away from the DB2 member where DSNTIJTC job is running
• Must temporarily change workload balancing and sysplex routing scheme
– Should temporarily stop all of the following workload types from running
• SQL DDL, Grants & Revokes, BIND/REBIND, utilities, monitors
– All essential business critical workloads that are running should commit frequently
– Must be prepared to watch and intervene if needed
– Strong recommendation to perform Pre-Migration Catalog Migration Testing
– Must be prepared for DSNTIJTC and/or DSNTIJEN jobs to possibly fail or for some
business transactions to fail
56. Online Migration in 24*7 environment …
• Some critical maintenance
– APAR PM62572
• Undetected lock contention failure during the switch phase of the ENFM REORG step
– APAR PM58575
• Autobind triggers deadlock with RTS
• If not prepared to play by the ‘rules of the game’ then take the outage
– Quiesce all applications
– Run DSNTIJTC or DSNTIJEN job with DB2 started ACCESS(MAINT)
57. Removal of DDF Private Protocol
• Must absolutely eliminate all use of DDF Private Protocol before migrating
– No longer supported In V10
– Any local packages mis-tagged with DDF Private Protocol will be tolerated
– Otherwise package must exist in both local and remote sites
– A lot of packages and plans are bound with DBPROTOCOL(PRIVATE) because this
was the default (zparm DBPROTCL) when introduced in DB2 V6
• DSNT226I is issued if DBPROTOCOL(PRIVATE) is used during REBIND
• See Reference Material for additional information on removing Private Protocol
58. Steps to investigate CPU performance regression
• Comparing CPU performance on V10 relative to V8 or V9
– More difficult to do in real customer production environment
• Uncertainty caused by application changes
• Fluctuation in the daily application profile especially batch flow
– Must try to normalise things out to ensure workloads are broadly comparable
• Broadly similar in terms of SQL and getpage profile
• Usually have to exclude the batch flow
• Factor out extreme variation
• Need to look at multiple data points
59. Steps to investigate CPU performance regression …
• Check that you have the same pattern across releases from a DB2 perspective
based on combined view of DB2 Statistics and Accounting Traces
• Validate that there have been no access path regression after migration or from
application changes going on at the same time as the migration
• As a starting point, look at
– Statistics Trace
• MSTR TCB & SRB, DBM1 TCB, SRB & IIP SRB, IRLM TCB & SRB CPU times
• Split of CP vs. zIIP for DBM1 is likely to be very different between V9 and V10
– Accounting
• For each CONNTYPE
– Class 2 CPU times on CP and zIIP, numbers of occurrences and commits/rollbacks
– Workload indicators:
• DML (split by type: select, insert, update, fetch, etc...),
• Commits, rollbacks, getpages, buffer update
• Read and write activity (#IOs, #pages)
60. Steps to investigate CPU performance regression …
• A challenge to get an 'apple-to-apple' comparison in a real production
environment
• Best chance is to find a period of time with limited batch activity, and to look at
the same period over several days in V8/V9 and several days running on V10
• Make sure that the CPU numbers are normalized across those intervals i.e., use
CPU milliseconds per commit
• Easy to combine statistics and accounting by stacking the various components of
CPU resource consumption:
– MSTR TCB / (commits + rollbacks)
– MSTR SRB / (commits + rollbacks)
– DBM1 TCB / (commits + rollbacks)
– DBM1 SRB / (commits + rollbacks)
– DBM1 IIP SRB / (commits + rollbacks)
– IRLM TCB / (commits + rollbacks)
– IRLM SRB / (commits + rollbacks)
– Average Class 2 CP CPU * occurrences / (commits + rollbacks)
– Average Class 2 SE CPU * occurrences / (commits + rollbacks)
61. Steps to investigate CPU performance regression …
• Need to check the workload indicators for the chosen periods
• Similarities between data points for a given version, but big variations between
V8/V9 and V10
– Sign that something has changed from an application or access path perspective
– More granular analysis of accounting data will be required to pinpoint the specific
plan/package
62. Other
• Ability to create classic partitioned table space (PTS)
– Classic PTS deprecated in V10
– By default will be created as UTS PBR
– UTS only supports table-controlled partitioning syntax
– Options to explicitly create a classic PTS (see the sketch at the end of this slide)
• Specify SEGSIZE 0 on CREATE TABLESPACE
• Set new zparm DPSEGSZ=0 (default 32)
• Fast Log Apply storage
– System parameter LOGAPSTG is eliminated
– Total FLA storage is now 510MB
• Old COBOL and PL/I
– V7 lookalike pre-compiler (DSNHPC7) for older COBOL and PL/I is still provided
• DDL Concurrency after Catalog restructure
– Some help provided but concurrency issues not absolutely solved
– Still deadlocks with parallel heavy DDL against different databases
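• Returning to classic PTS creation above, an illustrative sketch (names invented); an explicit SEGSIZE 0 forces a classic PTS regardless of the DPSEGSZ setting:

  CREATE TABLESPACE TSCLASSIC IN DBA1
    NUMPARTS 4
    SEGSIZE 0;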
63. Other …
• SPT01 compression is back
– Via system parameter COMPRESS_SPT01=YES (default is NO)
– Followed by REORG of SPT01 as part of ENFM process or later
• Statistics Interval
– IFCIDs 2, 202, 217, 225, 230 are always cut at a fixed 1 minute interval
– Remember to normalise the data when comparing V8/V9 vs. V10
– Only the frequency of IFCIDs 105, 106, 199 are controlled via system parameter
STATIME
– Consider setting STATIME to 5 minutes now to reduce SMF data volume
64. Other …
• RUNSTATS
– Page Sampling
• Performance improvement can be phenomenal
• Potential issues with accuracy because error rates increase as the sample size decreases
• No sampling done on indexes
– zIIP offload
• Nearly all RUNSTATS INDEX processing is offloaded, but only ‘basic’ RUNSTATS TABLE
processing is offloaded
• Much less for advanced/complex statistics
– Not supported:
• Inline stats
• COLGROUP
• DSTATS
• Histogram stats
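• An illustrative page-sampling invocation (database and table space names invented); TABLESAMPLE SYSTEM AUTO lets DB2 pick the sampling rate based on table size:

  RUNSTATS TABLESPACE DBA1.TSORDER
    TABLE(ALL) TABLESAMPLE SYSTEM AUTO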
66. Other …
• CHAR(dec), VARCHAR(dec), CAST(dec as CHAR), CAST(dec as VARCHAR)
– APAR PM29124 restores compatible behavior for CHAR(dec)
• Set new system parameter BIF_COMPATIBILITY=V9 (default)
• Get the old behavior for the CHAR(dec) function
• But new V10 behavior for VARCHAR/CAST
• New IFCID 366 to identify potential applications exposed
– APAR PM66095 and PM65722
• BIF_COMPATIBILITY=V9 will continue to provide the old behavior for the CHAR(dec)
• BIF_COMPATIBILITY=V9_DECIMAL_VARCHAR will get the old behavior for VARCHAR
and CAST also
• BIF_COMPATIBILITY=CURRENT provides the new behavior for all the built in functions
– APAR PM70455 will retrofit IFCID 366 to Version 9
67. Summary
• Very good release in terms of the opportunities for price/performance and
scalability improvement
– Significant DBM1 31-bit VSCR after rebind
– Use long term page fixed buffer pools
• Exploit 1MB real storage page frames on z10 and z196
– Reduced latch contention, log manager improvements, etc
– Opportunity for further price performance improvements
• More use of persistent threads
– CICS, IMS/TM, High Performance DBATs
• More use of RELEASE(DEALLOCATE) with persistent threads
• More use of RELEASE(DEALLOCATE) is a trade off
– Increased storage consumption
• Need to plan on additional real memory
– Reduced concurrency
• BIND/REBIND and DDL
• Increase MAXKEEPD to reduce short prepares for dynamic SQL
– Opportunity for scale up and LPAR/DB2 consolidation
68. Summary …
• Carefully plan, provision and monitor real storage consumption
• Any customer migrating from either V8 or V9 to V10 should make a solid plan,
take extra care to mitigate the risks, and set themselves up for success
– Regular full ‘major’ maintenance drops
– Exploitation of CST/RSU recommended maintenance
– Augment by regular use of Enhanced HOLDDATA
– Perform application regression and stress testing to keep ‘fires’ away from production
– Plan should allow some contingency for some ‘bumps in the road’
69. Chris Crone
IBM DB2 for z/OS Development
cjc@us.ibm.com
DB2 10 for z/OS Migration Planning and Experiences
71. Capturing documentation for IBM
• Methods for capturing documentation for all releases are documented here
– https://www.ibm.com/support/docview.wss?uid=swg21206998
– OSC and DB2PLI8 do not support DB2 10
• SYSPROC.ADMIN_INFO_SQL supports V8 -> V10 (Required)
– Excellent developerWorks article here:
• http://www.ibm.com/developerworks/data/library/techarticle/dm-1012capturequery/index.html
– It is installed in V10 base and is subject to the installation verification process
• DB2HLQ.SDSNSAMP(DSNTESR) will create and bind it
• The calling program is DSNADMSB; sample JCL is in DSNTEJ6I
• Ensure DB2 9 and DB2 10 have APAR PM39871 applied
• Data Studio V3.1 incorporates this procedure into a GUI (Best Practice)
– http://www.ibm.com/developerworks/downloads/im/data/
• No charge product, replacement for OSC and Visual Explain
• Several versions:
– DBAs should download the Administration Client
• Incorporates Statistics Advisor
• FTP doc directly to DB2 Level 2
• Can be used to duplicate stats in TEST environment
72. Security considerations when removing DDF Private Protocol
• This section also applies to customers using DRDA exclusively
• There are fundamental differences on how authorization is performed based on the
distributed protocol used
• Private Protocol (DB2 for z/OS requester)
– Supports static SQL statements only
– Plan owner must have authorization to execute all SQL executed on the DB2 server
– Plan owner authenticated on DB2 requester and not on the DB2 server
• DRDA Protocol
– Supports both static and dynamic SQL statements
– Primary auth ID and associated secondary auth IDs must have authorization to execute
package and dynamic SQL on the DB2 server
– Primary auth ID authenticated and secondary auth IDs are associated on DB2 server
• Prior to V10, Private Protocol and DRDA Protocol can be used by same application
– Private Protocol security semantics were used, to avoid inconsistent behavior dependent
on how programs are coded and executed
73. Security considerations when removing DDF Private Protocol …
• But there is also a difference prior to V10 in the authorizations required by an incoming
DRDA connection at the DB2 for z/OS server, depending on where the connection comes
from:
– Dynamic SQL DRDA connection from DB2 Connect and/or DB2 client direct connection
• Connecting userid needs authority to run the appropriate DB2 package and authority to access the DB2
table
– Dynamic SQL DRDA connection from DB2 for z/OS requester
• Connecting userid needs authority to access the DB2 table
• Originating plan owner needs authority to run the appropriate DB2 package
• It is different for DB2 for z/OS requester to DB2 for z/OS server because connections
were designed to use Private Protocol (PP) semantics to avoid changing authids when
switching between PP to DRDA Protocol
• With the disappearance of PP in V10, DB2 development decided to bring the DRDA connection
from DB2 for z/OS requester to DB2 for z/OS server in line with other DRDA requesters
and to change the authorizations required
– This was retrofitted back into V8 and V9 with APAR PM17665
– It is very important to distinguish clearly between the behavior of DRDA before and after APAR
PM17665
74. Security considerations when removing DDF Private Protocol …
• APAR PK92339 introduced new zparm PRIVATE_PROTOCOL=YES|NO
– To prevent future introduction of PP then set PRIVATE_PROTOCOL=NO
• The result of migrating to V10 or the introduction of APAR PM17665 under V8 or V9,
when running with PRIVATE_PROTOCOL=NO introduces the authorization changes at
the DB2 for z/OS server for DRDA connections coming from DB2 for z/OS requester
– PP security semantics are no longer used as default for access from a DB2 for z/OS requester
– Plan owner value is ignored and connecting userid must be granted authority to execute the
package at the remote site
– Otherwise the connection will fail with SQLCODE -551
• As a result of customer complaints, APAR PM37300 introduces
PRIVATE_PROTOCOL=AUTH which allows an installation to
– Disable PP but keep the plan owner authorization check (the “private protocol semantics”)
• Migration to V10 or the application of PTF for APAR PM17665 does affect you even if
you have everything already bound as DRDA
75. Security considerations when removing DDF Private Protocol …
• In summary
– Before disabling private protocol, ensure all appropriate grants are performed
• Grant execute privilege to any user who plans to run a package or stored procedure
package from a DB2 for z/OS requester, just like other DRDA clients
– DB2 V8 and V9 can disable private protocol but still maintain private protocol
authorization checks by
• Setting system parameter PRIVATE_PROTOCOL=AUTH
– DB2 10 does not support private protocol but can allow private protocol authorization
checks for use of DRDA protocol for DB2 for z/OS requesters by
• Setting system parameter PRIVATE_PROTOCOL=AUTH
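• A minimal sketch of the grants needed before disabling private protocol (collection, package and auth ID names invented):

  GRANT EXECUTE ON PACKAGE COLL1.PKG1 TO APPUSER1;
  GRANT EXECUTE ON PACKAGE COLL1.*    TO APPUSER1;   -- or grant at the collection level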