The third in the Migration Month webcast series looking at DB2 10 migration planning. This webcast goes into the scalability benefits available in DB2 10, with Julian Stuhler of Triton Consulting & Jeff Josten of IBM.
MongoDB 101 & Beyond: Get Started in MongoDB 3.0, Preview 3.2 & Demo of Ops M..., by MongoDB
This document summarizes new features in MongoDB versions 3.0, 3.2 and how Ops Manager can help manage MongoDB deployments. Key points include:
- MongoDB 3.0 introduces pluggable storage engines like WiredTiger which offers improved write performance over MMAPv1 through document-level concurrency and built-in compression.
- Ops Manager provides automation for tasks like zero downtime cluster upgrades, ensuring availability and best practices. It reduces management overhead.
- MongoDB 3.2 features include faster failovers, support for more data centers, new aggregation stages, encryption at rest, partial indexes, and document validation.
- Compass is a new GUI for visualizing data and performing common operations.
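The partial-index and document-validation features mentioned above are easy to picture as the command documents a driver sends to the server. Here is a minimal sketch in Python using plain dicts; the collection and field names are made-up examples, not anything from the deck:

```python
# Sketch (not from the deck): the shape of a MongoDB 3.2 partial index
# and a document-validation rule, written as the plain command documents
# a driver would send. Collection and field names are hypothetical.

def partial_index_spec():
    # Index only documents whose "status" is "active": a smaller index
    # and cheaper writes, for queries that include the filter condition.
    return {
        "createIndexes": "orders",
        "indexes": [{
            "key": {"customer_id": 1},
            "name": "active_orders_customer",
            "partialFilterExpression": {"status": "active"},
        }],
    }

def validator_spec():
    # Require an "email" field on inserts/updates to the collection.
    return {
        "collMod": "users",
        "validator": {"email": {"$exists": True}},
        "validationLevel": "moderate",
    }

if __name__ == "__main__":
    print(partial_index_spec()["indexes"][0]["name"])
```

The same structures can be passed to a driver's `command()` call; only documents matching the partial filter are indexed, and only writes satisfying the validator are accepted.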
Defrag.NSF+ is a Domino-specific database defragmentation tool that runs as a Domino server task. It intelligently auto-switches between file-level and volume-level defragmentation. Key features include automatic scheduling and tagging of databases for defragmentation, analyzing and reducing database fragmentation, and automated maintenance of system databases. Regular defragmentation with Defrag.NSF+ can significantly speed up backup times and improve database performance.
Hardware Planning & Sizing for SQL Server, by Davide Mauri
This document provides an overview of hardware planning and sizing considerations for SQL Server. It notes that performance is the typical requirement for a relational database management system, yet typical server hardware configurations often result in unbalanced systems that are not optimized. The document advocates balanced systems with no single bottleneck, and provides guidance on evaluating CPU, memory, I/O capabilities, and storage to ensure a system can handle peak resource consumption. Baseline testing is recommended to compare hardware performance.
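The "no single bottleneck" argument can be sketched as a small calculation: express each component's peak demand as a fraction of its capacity and see which one saturates first. The components and numbers below are illustrative assumptions, not figures from the document:

```python
# A minimal sketch of the balanced-system idea: the component with the
# highest demand/capacity ratio is the bottleneck at peak load. All
# numbers here are illustrative, not taken from the presentation.

def utilization(demand, capacity):
    return demand / capacity

def bottleneck(components):
    # components: {name: (peak_demand, capacity)} in any consistent unit
    ratios = {n: utilization(d, c) for n, (d, c) in components.items()}
    name = max(ratios, key=ratios.get)
    return name, ratios[name]

if __name__ == "__main__":
    system = {
        "cpu":    (14.0, 16.0),    # cores needed at peak vs. cores available
        "memory": (96.0, 128.0),   # GB
        "io":     (38000, 40000),  # IOPS
    }
    name, ratio = bottleneck(system)
    print(name, round(ratio, 3))
```

A balanced design keeps these ratios roughly level; upgrading any component other than the current bottleneck buys no peak throughput.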
This document summarizes 11 cool features of Defrag.NSF+ v11, a Domino-specific database defragmentation product. It provides automatic scheduling and tagging of databases for defragmentation. It intelligently switches between file and volume defragmentation and analyzes and consolidates freespace to reduce fragmentation. It also includes automated maintenance of system databases and reporting on database health and optimization.
The document discusses tuning MySQL server settings for performance. Some key points covered include:
- Settings are workload-specific and depend on factors like storage engine, OS, hardware. Tuning involves getting a few settings right rather than maximizing all settings.
- Monitoring tools like SHOW STATUS, SHOW INNODB STATUS, and OS tools can help evaluate performance and identify tuning opportunities.
- Memory allocation and settings like innodb_buffer_pool_size, key_buffer_size, query_cache_size are important to configure based on the workload and available memory.
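As a rough illustration of memory-based sizing, the sketch below turns total RAM into a starting innodb_buffer_pool_size, using the common rule of thumb of roughly 70% of RAM on a dedicated server. The fractions are an assumption for illustration, not figures from the document, and any starting point still has to be validated against the actual workload:

```python
# Sketch: derive a starting innodb_buffer_pool_size from total RAM.
# The 70% / 25% fractions are a widely used rule of thumb (an assumption
# here, not the document's recommendation); monitor and adjust afterwards.

def suggested_buffer_pool_bytes(total_ram_gb, dedicated=True):
    # Dedicated InnoDB server: leave headroom for the OS and per-connection
    # buffers. Shared host: be far more conservative.
    fraction = 0.7 if dedicated else 0.25
    return int(total_ram_gb * fraction * 1024**3)

if __name__ == "__main__":
    gb = suggested_buffer_pool_bytes(64) / 1024**3
    print(f"innodb_buffer_pool_size = {gb:.0f}G")
```

The resulting value would go into my.cnf; tools like SHOW STATUS (buffer pool hit rate, free pages) then tell you whether the guess was high or low for the workload.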
The document discusses PostgreSQL high availability and scaling options. It covers horizontal scaling using load balancing and data partitioning across multiple servers. It also covers high availability techniques like master-slave replication, warm standby servers with point-in-time recovery, and using a heartbeat to prevent multiple servers from becoming a master. The document recommends an initial architecture with two servers using warm standby and point-in-time recovery with a heartbeat for high availability. It suggests scaling the application servers horizontally later on if more capacity is needed.
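The heartbeat idea in that summary can be sketched as a small guard: a standby promotes itself only after several consecutive missed heartbeats and only while holding a single cluster-wide token, so two servers cannot both decide to become master. The thresholds and the token mechanism here are illustrative assumptions; real deployments use cluster managers such as Pacemaker:

```python
# Sketch of a split-brain guard (illustrative, not from the document):
# promote only after N consecutive missed heartbeats AND while holding
# a single cluster-wide token that only one node can own at a time.

class FailoverGuard:
    def __init__(self, misses_needed=3):
        self.misses_needed = misses_needed
        self.misses = 0

    def heartbeat(self, primary_alive, holds_token):
        # Any successful heartbeat resets the miss counter.
        self.misses = 0 if primary_alive else self.misses + 1
        # Promote only on enough consecutive misses, and only with the token.
        return (not primary_alive
                and self.misses >= self.misses_needed
                and holds_token)

if __name__ == "__main__":
    g = FailoverGuard()
    decisions = [g.heartbeat(alive, holds_token=True)
                 for alive in (True, False, False, False)]
    print(decisions)  # promotion fires only on the third consecutive miss
```

Requiring both conditions is the point: the miss threshold filters transient network blips, and the token ensures at most one standby can ever win the promotion.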
Best Practices of HA and Replication of PostgreSQL in Virtualized Environments, by Jignesh Shah
This document discusses best practices for high availability (HA) and replication of PostgreSQL databases in virtualized environments. It covers enterprise needs for HA, technologies like VMware HA and replication that can provide HA, and deployment blueprints for HA, read scaling, and disaster recovery within and across datacenters. The document also discusses PostgreSQL's different replication modes and how they can be used for HA, read scaling, and disaster recovery.
VMworld 2014: Advanced SQL Server on vSphere Techniques and Best Practices, by VMworld
This document provides an overview of advanced SQL Server techniques and best practices when running SQL Server in a virtualized environment on vSphere. It covers topics such as storage configuration including VMFS, block alignment, and I/O profiling. Networking techniques like jumbo frames and guest tuning are discussed. The document also reviews memory management and optimization, CPU sizing considerations, workload consolidation strategies, and high availability options for SQL Server on vSphere.
Right-Sizing your SQL Server Virtual Machine, by heraflux
This document discusses "right-sizing" a SQL Server virtual machine (VM) by properly allocating CPU, memory, and storage resources. It explains that one size does not fit all workloads and inappropriate allocations can hurt performance. The presenter recommends profiling systems by collecting metrics from all stack components, analyzing workloads, and adjusting VM configurations based on the data. Regular reviews are also advised as workloads change. A new free beta tool is announced that will automate estimating the right-sized resource assignment for a SQL Server VM.
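The profiling approach described above can be sketched as: collect utilization samples, take a high percentile, add headroom, and size the VM from that rather than from a default. The percentile and headroom values below are illustrative assumptions, not the presenter's numbers:

```python
# Sketch of metric-driven right-sizing (illustrative assumptions only):
# size vCPUs from a high percentile of observed CPU-core demand plus
# headroom, instead of copying a template default.

import math

def percentile(samples, p):
    # Nearest-rank percentile: the smallest value with at least p% of
    # samples at or below it.
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

def right_size_vcpus(cpu_core_samples, headroom=1.25):
    need = percentile(cpu_core_samples, 95) * headroom
    return max(1, math.ceil(need))

if __name__ == "__main__":
    # Cores of CPU demand observed across a sampling window (made up).
    samples = [2.1, 2.4, 3.0, 2.2, 5.8, 2.0, 2.3, 2.5, 2.6, 2.2]
    print(right_size_vcpus(samples))
```

The same pattern applies to memory and IOPS; re-running it periodically matches the talk's advice to review allocations as workloads change.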
Session from NCUG, Stockholm, 12 June 2019.
Basic Domino performance tuning: ideas for improving performance, the statistics that show whether there are issues, and how to fix them.
The document discusses using the TPC-C benchmark to study Firebird database performance under load. It describes running tests with different Firebird configurations, hardware, and database sizes to determine optimal settings. Analysis found page size, buffer size, and hash slots impact performance, but settings optimized for HDDs did not always help SSD performance which responded differently. The tests provided valuable insights into Firebird performance tuning but also showed more analysis is needed to optimize configurations for different hardware.
SQL Server 2014 New Features (Sql Server 2014 Yenilikleri), by BT Akademi
The document summarizes several topics discussed by Ismail Adar, including buffer pool extension, resource governor for I/O, delayed durability, the DMV sys.dm_exec_query_profiles, and parallel SELECT INTO. Buffer pool extension allows using SSD storage to increase the amount of memory available for the buffer pool. Resource governor for I/O provides I/O-level isolation between workloads. Delayed durability controls the durability of transactions. The DMV sys.dm_exec_query_profiles reports on query execution in real time. Parallel SELECT INTO inserts the results of a query into a table in parallel.
Technologies for Working with Disk Storage and File Systems in Windows Serve..., by Виталий Стародубцев
- What Storage Replica is
- Architecture and scenarios
- Synchronous and asynchronous replication
- Disk-to-disk, server-to-server, stretch-cluster (within a cluster), and cluster-to-cluster replication
- Storage Replica design and planning
- What's new in Windows Server 2016 TP5
- The management GUI and other capabilities: demonstration and development plans
- Storage Replica integration with Storage Spaces Direct
SQL In The City - Understanding and Controlling Transaction Logs by Nigel Peter Sammy.
- Relational DBMS Basics
- Introduction to Transaction Logs
- The Architecture
- Recovery Models
- Managing the Transaction Logs
- Red Gate Tools
- Asynchronous cascading master to multiple replicas
- Asynchronous multi-master
Can be used for:
- Improved performance for geographically dispersed users
- High availability
- Load distribution (OLTP vs. reporting)
A presentation on best practices for J2EE scalability from requirements gathering through to implementation, including design and architecture along the way.
Webinar agenda:
- What is Storage Spaces Direct?
- Usage scenarios for Storage Spaces.
- Minimum requirements for Storage Spaces.
- How to configure Windows Server 2016 Storage Spaces Direct to work with a server's local disks.
- What is Storage Replica?
- The difference between the synchronous and asynchronous replication approaches.
- Which replication technologies to use for which tasks (DFS-R, Hyper-V Replica, SQL AlwaysOn, Exchange DAG), and how they combine with the new capabilities of Windows Server 2016.
- What is ReFS, and how does it differ in Server 2016 from earlier editions of the OS?
- What ReFS offers for Hyper-V virtual machines: scenarios and capabilities.
- General changes to storage technologies in Windows Server 2016.
Storage and Performance - Batch Processing, Whiptail, by Internet World
Batch processing allows jobs to run without manual intervention by shifting processing to less busy times. It avoids idling computing resources, allows higher overall utilization, and makes it possible to prioritize batch work relative to interactive work. The document then discusses different approaches to batch processing, such as dedicating all resources to it or sharing resources, and outlines challenges like systems being unavailable while batch jobs run. The rest of the document summarizes Whiptail's flash storage solutions for accelerating workloads and reducing costs and resources compared to HDDs.
This document discusses best practices for virtualizing databases. It begins with an introduction of the presenters, Michael Corey and Jeff Szastak, who are experts in virtualizing Oracle and SQL Server databases. The document then covers reasons for virtualizing databases, including flexibility, efficiency of resources, and cost savings. It provides examples of large production databases that have been successfully virtualized. The document discusses performance results from testing that show virtualized database performance is typically within 5% of physical performance. It provides recommendations for right-sizing resources and avoiding configurations like BIOS settings that could negatively impact performance. The overall message is that databases can be successfully virtualized while meeting service level agreements by following best practices.
IBM Spectrum Scale Fundamentals Workshop for Americas, Part 2: IBM Spectrum Sca..., by xKinAnx
This document discusses quorum nodes in Spectrum Scale clusters and recovery from failures. It describes how quorum nodes determine the active cluster and prevent partitioning. The document outlines best practices for quorum nodes and provides steps to recover from loss of a quorum node majority or failure of the primary and secondary configuration servers.
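The majority rule behind quorum nodes can be sketched in a few lines: a partition stays active only while it can reach a strict majority of the designated quorum nodes, so two partitions can never both stay up. This illustrates the general rule only; Spectrum Scale also supports tiebreaker disks for small clusters:

```python
# Sketch of the quorum-majority rule (an illustration of the general
# principle, not Spectrum Scale's implementation): a partition that
# cannot reach a strict majority of quorum nodes must stop serving.

def has_quorum(reachable_quorum_nodes, total_quorum_nodes):
    # Strict majority: with 3 quorum nodes you need 2; with 4 you need 3.
    return reachable_quorum_nodes > total_quorum_nodes // 2

if __name__ == "__main__":
    # With 3 quorum nodes, a partition seeing only 1 of them must stop,
    # while the partition seeing the other 2 stays active.
    print(has_quorum(2, 3), has_quorum(1, 3))
```

Because a strict majority can exist in at most one partition, the rule also explains the best practice of using an odd number of quorum nodes: with 4 nodes a 2/2 split leaves neither side with quorum.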
GlusterFS 3.3 includes several new features:
- Granular locking allows block-by-block healing when servers fail and return.
- Replication improvements include proactive self-healing where recovered nodes query peers and a synchronous translator API.
- Unified file and object storage (UFO) provides S3- and Swift-style object access via HTTP or Gluster mounts.
- HDFS compatibility allows running MapReduce jobs on GlusterFS and adding unstructured data to Hadoop.
Are you unsure of the steps needed to get your Continuent Tungsten cluster up-and-running? In this live virtual course, we will teach you how to get from a single database server to a scalable cluster, or from a brittle MySQL replication system to a transparent, manageable Tungsten cluster.
We will discuss the benefits of leveraging Continuent Tungsten clustering with MySQL, and walk you through the steps to implement a Tungsten cluster in Amazon EC2. We'll cover the prerequisites, installing and configuring Tungsten, and best practices that are part of most production installations and proof-of-concepts.
Course Topics:
- Configuring MySQL and the OS for proper installation
- Installing a cross-site cluster
- Schema upgrade on the master database server with minimal application downtime (switch operation)
- Automated failover when a MySQL database server crashes
- Recovery of a failed master to a fully operational slave with a single command (recover operation)
- Switching database operations to a remote site (geo-clustering, cross-site 'switch' operation)
We will also discuss and demonstrate basic operations, such as adding and removing a cluster node, basic monitoring and troubleshooting, and discuss the basic failure scenarios.
Learn how to quickly configure and provision highly optimized Continuent Tungsten deployments in the cloud or on-premises.
This document outlines the key concepts of Google's Bigtable distributed database system. It discusses Bigtable's data model, APIs, implementation details including its use of GFS and Chubby, refinements to improve performance, and lessons learned. The document poses many questions about Bigtable's design and implementation for further discussion.
Bank Data: Frank Peterson, DB2 10 Early Experiences, by Surekha Parekh
DB2 for z/OS update seminar focused on Bankdata's experiences testing DB2 10 during the beta process. Key items tested included hash access to data, XML engine schema validation, XML multi-versioning, and other new features. Testing revealed surprises around administrative overhead and challenges completing performance tests. Results showed hash access provided CPU savings compared to non-hash access when data is relatively static. XML schema validation was moved to the engine for improved performance.
Abstract - DB2 10 for z/OS: where we are today, and where we are going. This session will take you through the latest with DB2 10: which functions customers are finding most valuable, the latest enhancements, and the current status of DB2 10 in the marketplace. We will also cover the latest on DB2 11 and the status of the ESP, and touch on some industry trends that are influencing the enhancements we are planning for DB2 in the future.
Learn how you can use the new workload management histograms feature in IBM® DB2® 9.5 for Linux®, UNIX®, and Windows® to better understand your workloads, determine the root cause of system slowdowns related to changes in workload, and easily track adherence to performance Service Level Agreements.
Part 2: IBM DB2 Content Manager API Training Slides, by MEJDI Med
The document discusses the architecture of IBM DB2 Content Manager V8. It describes the main components as the library server, which manages metadata and access control, and resource managers which store content. It also outlines the clients that interact with the system, such as eClients, Windows clients, and custom clients. The library server communicates with resource managers and clients to index, locate, and retrieve content on behalf of authorized users.
This document discusses considerations for migrating to DB2 10 from earlier versions. It notes that IBM is ending support for DB2 V8 in 2012, prompting many organizations to migrate. Key topics covered include potential issues with skipping versions in migration, features deprecated in later versions, checking software prerequisites, and rebinding plans and packages to adjust to changes in access paths. The document aims to provide guidance on planning a smoother migration process.
DB2 10 Memory Management, UK DB2 User Group, June 2013, by Laura Hood
DB2 10 provides significant enhancements to memory management that allow for much greater scalability. Key changes include moving most objects above the 2GB bar, enabling larger buffer pools through 1MB page support, and enhanced real storage monitoring. Migrating to DB2 10 requires ensuring sufficient real storage is available, monitoring real storage usage, and addressing other limiting factors before taking advantage of new features to further scale vertically.
DB2 10 Memory Management, UK DB2 User Group, June 2013, by Carol Davis-Mann
DB2 10 for z/OS includes major enhancements to memory management that allow most DB2 storage objects to reside above the 2GB bar, providing up to a 10x increase in threads per subsystem. This reduces a key scalability limitation. To take advantage of these virtual storage improvements, additional real memory is required, typically a 10-30% increase over DB2 9 requirements. Customers should also monitor and manage real storage usage with new DB2 10 functions to avoid paging issues. The virtual storage changes along with other DB2 10 capabilities could allow for reduced DB2 subsystem counts and improved performance.
DB2 10 Webcast #1: Overview and Migration Planning, by Carol Davis-Mann
DB2 10 for z/OS provides many new features and performance enhancements over previous versions. Migrating to DB2 10 involves following standard upgrade procedures, meeting all technical prerequisites, moving to conversion mode, then enabling new functions mode. Customers on DB2 8 can also do a "skip migration" directly to DB2 10. IBM offers workshops to help customers plan their DB2 10 migrations.
DB2 10 Smarter Database, IBM Tech Forum 2011, by Laura Hood
DB2 10 for z/OS is a new version of IBM's database software that provides significant performance improvements, new security and temporal data features, and easier migration paths from prior versions. Key enhancements in DB2 10 include 5-20% CPU reductions, up to 10x more threads per subsystem due to virtual storage improvements, row and column access controls, and built-in support for tracking historical data. Customers running DB2 9 can upgrade directly, and DB2 8 customers can move straight to DB2 10 using the new "skip migration" functionality. Migrating to DB2 10 requires meeting prerequisites and following the steps to move through conversion mode to new-function mode.
DB2 10 Webcast #2 - Justifying the Upgrade, by Laura Hood
This document discusses justifying an upgrade from DB2 9 or 8 to DB2 10 for z/OS. It outlines potential CPU, productivity, and availability savings from the upgrade. CPU savings can come from improved performance in conversion mode through features like high performance database application transition support. Productivity savings may result from features that improve plan stability and temporal tables. Availability improvements like online reorganization of LOBs can reduce downtime costs. The presentation recommends using IBM's DB2 10 Business Value Assessment Estimator Tool to quantify specific savings for an organization.
This document discusses justifying an upgrade from DB2 9 or 8 to DB2 10 for z/OS. It outlines potential CPU, productivity, and availability savings from the upgrade. CPU savings can come from improved performance in conversion mode through features like high performance database access threads. Productivity savings may result from reduced subsystem consolidation time. Availability improvements like online REORG for LOBs can reduce downtime costs. The presentation recommends using IBM's Business Value Assessment Estimator Tool to quantify specific savings for an organization.
DB2 Design for High Availability and Scalability - Surekha Parekh
Are you overwhelmed by the growing amount of data in your environment? Are you maximizing application availability? As the number of tables with billions of rows continues to grow, so do the management challenges. In this session, we will discuss the challenges and solutions for optimum availability and performance, with techniques to efficiently and effectively manage very large amounts of data.
DB2 11 for z/OS Migration Planning and Early Customer Experiences - John Campbell
This extensive presentation provides help and guidance to DB2 for z/OS customers migrating to V11 as quickly, but as safely, as possible. The material provides additional planning information and shares customer experiences and best practices.
IBM DB2 Analytics Accelerator Trends & Directions by Namik Hrle Surekha Parekh
IBM DB2 Analytics Accelerator has drawn lots of attention from DB2 for z/OS users. In many respects it presents itself as just another DB2 access path (but what a powerful one!) and its deep integration into DB2 as well as application transparency makes it one of the most exciting DB2 enhancements in years. The IBM DB2 Analytics Accelerator complements DB2 by adding industry leading data intensive complex query performance thanks to being powered by the Netezza engine and enhances DB2 to the ultimate database management system that delivers the best of both worlds: transactional as well as analytical workloads. This presentation brings the latest news from the IDAA development and shows the trends and directions in which this technology develops.
DB2 for z/OS Real Storage Monitoring, Control and Planning - John Campbell
Just added another hot DB2 topic around DB2 for z/OS Real Storage Monitoring, Control and Planning - Check it out and make sure your system runs safely
IMS 14 includes many new features to improve agility, application deployment and management, integration with DB2, business growth capabilities, infrastructure enhancements, and database and transaction manager enhancements. Key highlights include enhancements to support dynamic database changes, catalog management of resources, OSAM and DEDB improvements, SQL aggregation functions, DBRC and FDBR enhancements, reduced TCO, and cascaded transaction support across LPARs.
This document discusses the benefits of IBM DB2 software in SAP environments. It provides examples of customers like Colgate-Palmolive and Coca-Cola Bottling Co. that achieved significant cost savings and performance improvements after migrating their SAP systems from Oracle to IBM DB2. One Swiss customer tested DB2 and Oracle on comparable hardware and found DB2 performed 48% better while using 30% less memory. DB2 also provided greater data compression and backup compression. The document outlines other advantages of DB2 like reduced storage needs, improved OLTP and OLAP performance, and lower licensing costs.
Pure Genius: How To Get Mainframe-Like Scalability & Availability For Midrange DB2 discusses pureScale, an optional feature for DB2 that implements shared-disk clustering to provide high scalability and availability. It can support up to 128 members. The architecture uses a shared database, cluster caching facilities, and InfiniBand networking. Customers report scalability gains, easy installation, and resilience such as continued operation despite a cluster caching facility failure. The presentation evaluates pureScale's benefits and customer experiences.
DB2 for z/OS is well-suited for managing big data due to its ability to scale, high availability, strong security, and high performance. It has supported some of the largest databases and workloads in the world. Migrating to DB2 10 for z/OS provides improvements like reduced CPU usage, more concurrency, and online changes without downtime. DB2 for z/OS also has a long history and maturity as a mission-critical database.
The document provides an overview of the IBM DS8000 storage system and its capabilities for data protection and cyber resiliency. Some key points:
- The DS8000 offers balanced performance, reliability, scalability, and flexibility for critical enterprise storage needs.
- It provides modern data protection features like data encryption, thin provisioning, and IBM Database Protection.
- The system is designed for cyber resiliency with functions that optimize caching, prefetching, and data placement to improve I/O performance.
Similar to DB2 10 Webcast #3 The Secrets Of Scalability (20)
This document discusses a security issue that occurred when improperly configuring DB2 federation. Specifically:
1. A client site configured DB2-LDAP federation but also enabled the FED_NOAUTH parameter, bypassing authentication.
2. This meant any user could connect to the database as any other user without providing the correct password.
3. If the database owner username was guessed, full access to all data could be obtained, potentially exposing the database to a major security breach.
The issue was caused by incorrectly enabling the FED_NOAUTH parameter when federation was set up. Authentication should have occurred at the database rather than being bypassed. The moral is to not enable FED_NOAUTH unless you fully understand its authentication implications.
What do you do when disaster strikes? In part 9 of our DB2 Support Nightmare series we look at another DB2 disaster scenario and how it was resolved by the experts at Triton Consulting.
Number 8 in our Top 10 DB2 Support Nightmares series. This month we take a look at what happens when organisations are not able to keep up to date with the latest DB2 technology.
Imagine the scene – a broken database on an unsupported version of DB2, with no backups or log files to recover the database.
Yes – this one really was the stuff of nightmares!
Download if you dare! In part six of our DB2 Nightmares series we see what can happen when an experienced DBA goes on holiday leaving the Junior DBA in charge with no support.
Consultancy on Demand is a specially designed service for customers who need varying levels of DB2 support throughout the year.
You purchase a block of 20, 50 or 100 hours. You can then call off hours as and when you need them. No commitment required!
A Time Traveller's Guide to DB2: Technology Themes for 2014 and Beyond - Laura Hood
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It summarizes DB2's focus on these areas today and potential future directions, such as further optimization to reduce software licensing fees, expanded data sharing capabilities, increased memory capacities, evolving skills needs, and continued integration with big data platforms. The document aims to help DB2 professionals consider strategies for addressing these themes.
A junior DBA accidentally deleted all rows from a critical table in a pre-production environment. The DBA had connected to the wrong system and used the instance owner userid. The system administrator had enabled the FED_NOAUTH parameter, which bypasses authentication at the instance level. This meant any user could connect as any other user without the correct password and impact the database. The moral is that unintended consequences can occur from small configuration changes and it is important to get skilled DB2 support.
DB2 10 for z/OS introduced temporal data support which allows applications to query data as it existed at different points in time. The document discusses system temporal tables, business temporal tables, and bi-temporal tables. It provides examples of temporal DDL, SELECT extensions for querying historical data, and discusses early experiences and performance considerations with temporal data in DB2 10.
DB2DART is a tool that allows DBAs to inspect, format, and repair DB2 databases and objects. It can be used to handle storage reclamation issues by lowering high water marks, detect and repair index corruption, extract data from corrupt tables, and remove backup pending states. DB2DART provides granular analysis at the database, tablespace, and table level and its repair capabilities save DBAs from having to call support or restore from backups in many cases.
Temporal And Other DB2 10 For z/OS Highlights - Laura Hood
The document discusses DB2 10 for z/OS and its new temporal data support feature. It provides an overview of DB2 10, describing new features such as temporal data, virtual storage enhancements, and optimizer enhancements. It then discusses temporal data concepts in more detail, including temporal tables, periods, business temporal tables and system temporal tables. The document provides examples and explains how to implement temporal tables in DB2 10. It concludes by listing further reading materials on DB2 10.
The document discusses IBM's pureScale technology which allows DB2 databases to scale up to 128 nodes for high availability and scalability. PureScale forms a shared-disk cluster and uses proven "data sharing" technology from DB2 for z/OS. It provides agility to rapidly scale up or down capacity as needed with little application change. The company Triton built a basic 2-node pureScale cluster within a budget of under £1K to validate IBM's claims and gain hands-on experience. Their testing showed the cluster delivered 1000 transactions per second under load. The summary concludes that pureScale provides robust clustering with excellent price/performance.
Episode 4 DB2 pureScale Performance Webinar Oct 2010 - Laura Hood
DB2 pureScale provides scalability and high performance through its clustered database architecture. It uses a cluster caching facility to manage data consistency across member nodes and leverage low-latency interconnects like InfiniBand. The architecture features two-level buffer pool caching between local and global pools for improved read performance. Monitoring and tuning focuses on optimizing buffer pool hit ratios at both levels. Initial proof points showed near-linear scalability up to 12 nodes and over 80% scalability even at 128 nodes, demonstrating the architecture's ability to transparently scale database workloads across many servers.
DB2 pureScale provides high availability and continuous operations by automatically recovering from component failures through workload redistribution and fast in-flight transaction recovery. It protects databases by balancing workloads across nodes and uses duplexed secondary components to tolerate multiple simultaneous node failures while keeping other nodes online and services available.
DB2 pureScale provides a highly scalable and available database solution. It allows customers to start small and grow capacity easily by adding cluster members without disrupting applications. DB2 pureScale uses a shared-disk architecture, with each member running on its own server. It provides a single system view to clients and automatically balances workload across members. Critical features include near-unlimited scalability, continuous availability even during member failures, and the ability to perform maintenance without outages.
DB2 pureScale provides unlimited scalability, application transparency, and continuous availability for transaction processing and ERP workloads. It uses a shared-disk architecture where multiple database members connect to a single database and cooperate to provide a single system image to clients. PowerHA pureScale technology handles global bufferpool and locking management to maintain data consistency as members scale out.
1. DB2 10 – The Secrets of Scalability
Jeff Josten, Distinguished Engineer, DB2 for z/OS Development
Julian Stuhler, Principal Consultant, Triton Consulting
The Information Management Specialists
2. Triton Migration Month
• Series of DB2 10 Webcasts
  - DB2 10 Overview - Get Ready to Plan your Migration: 3rd November 16:00-17:00 GMT
  - DB2 10: Justifying the Upgrade: 10th November 16:00-17:00 GMT
  - DB2 10 - The Secrets of Scalability: 1st December 16:00-17:00 GMT
3. Agenda
• Introduction
• The Need for Scalability
• DB2 10 Scalability Enhancements
  - Virtual storage constraint relief
  - Latch contention reduction
  - Catalog concurrency enhancements
  - SMF compression
  - Other scalability enhancements
• Summary & Questions
4. Introduction
• Julian Stuhler
  - Director and Principal Consultant at Triton Consulting
  - 24 years DB2 experience, 19 as a consultant working with customers in the UK, Europe and the US
  - IBM Gold Consultant since 1999
  - IBM Information Champion
  - Former IDUG (International DB2 User Group) President
  - Author of IBM Redbooks, white papers and more recently "flashbooks"
  - Designer of IBM's new "DB2 10 Business Value Assessment Estimator Tool"
• Jeff Josten
  - IBM Distinguished Engineer
  - Lead architect, DB2 for z/OS
5. The Need for Scalability
6. The Need for Scalability
• IT volumes continue to increase
  - More applications
  - More data
  - More transactions
• Performance is ever more important
  - Customers need to support workload growth without a drop-off in performance
• Availability is ever more important
  - Pressure to reduce both planned and unplanned outages
• End result: each DB2 environment is being asked to work harder, with less downtime
• Every DB2 release attempts to push back these boundaries, but major progress has been made in DB2 10
7. DB2 10 for z/OS
• Extensive beta program running throughout 2009/10, with customers from all around the world
• Generally available since October 2010
• Support for skip migration from V8 as well as DB2 9
  - Will make the cost case for upgrade even more compelling
• Excellent uptake
  - First customers now running DB2 10 in production
  - Compared to DB2 9 at 12 months after GA:
    ► 3 x number of customers running DB2 10
    ► 4 x number of DB2 10 licences
    ► 3 x total number of MSUs
  - Many customers are planning their DB2 10 upgrades now, with most intending to begin real work in the next 6-18 months
8. Top New Features
• CPU/Performance Improvements
• Virtual Storage Enhancements
• Security Extensions
• Improved Catalog Concurrency
• Temporal Data
• Access Path Management
• pureXML enhancements
• Currently Committed semantics
• Automated statistics
• Dynamic schema change enhancements
• In-memory object support
• Optimiser enhancements
• MEMBER CLUSTER for UTS
• Backup and recovery enhancements
• Enhanced audit
• Include additional index columns
• Enhanced SQL OLAP functions
• Skip Migration (see later)
• And many more…
10. Overview
• Virtual storage constraint relief
• Latch contention reduction
• Catalog concurrency enhancements
• SMF compression
• Other scalability enhancements
11. Virtual Storage Enhancements
• V8 began a major project to transform DB2 into a 64-bit RDBMS
  - Laid the groundwork and provided some scalability improvements, but a lot of DBM1 objects remained below the 2GB bar
• DB2 9 improved things a little, but only by another 10-15% for most customers
  - Practical limit of 300-500 threads per DB2 subsystem
• DB2 10 moves 80-90% of the remaining objects above the bar, resulting in a 5-10x improvement in threads per subsystem (CM)
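The arithmetic behind that claim is simple enough to sketch. A minimal illustration follows; the 300-500 thread ceiling and 5-10x multiplier are the slide's figures, while the helper function and its name are ours:

```python
# Estimate the DB2 10 thread ceiling from a pre-V10 ceiling, using the
# 5-10x improvement range quoted on this slide. Illustrative only: the
# real limit depends on workload, real storage and ZPARM settings.
def db2_10_thread_range(pre_v10_threads):
    return 5 * pre_v10_threads, 10 * pre_v10_threads

# The old practical range of 300-500 threads per subsystem becomes:
print(db2_10_thread_range(300))  # (1500, 3000)
print(db2_10_thread_range(500))  # (2500, 5000)
```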
13. Virtual Storage Enhancements
• Possibility for fewer DB2 subsystems (and possibly fewer LPARs) in a data sharing environment
  - Lower data sharing overhead
  - Fewer systems to manage / maintain
  - Minimum of 4 members still recommended for true continuous availability
14. Virtual Storage Enhancements
• More space for performance-critical storage objects such as the dynamic statement cache
• Potential to reduce legacy OLTP CPU cost through
  - More use of CICS protected entry threads
  - More use of RELEASE(DEALLOCATE) with persistent threads (with trade-off on concurrency)
  - DB2 10 High-Performance DBATs
• Other limiting factors on vertical scalability still remain
  - Real storage
  - ESQA/ECSA (31-bit) storage
  - Active log write and SMF volumes
15. Real Storage Enhancements
• For prior releases, z/OS always managed DB2 bufferpool pages as 4K frames
• Move to 64-bit architecture made much larger buffer pools viable
  - Bufferpools can use many millions of 4K pages
  - Increased z/OS overheads for page management
[Diagram: DB2 9 buffer pool backed by individual 4K pages in z/OS storage]
16. Real Storage Enhancements
• DB2 10 introduces support for 1MB pages to reduce z/OS page management overheads
  - Needs z10 or newer z196 server
  - Needs bufferpool to be defined with PGFIX=YES
  - z/OS sysprogs must partition real storage between 4K and 1MB frames (IEASYSnn in PARMLIB)
• Customer testing during the beta program showed CPU reductions of 0-6% with this feature enabled
[Diagram: DB2 10 buffer pool backed by a mix of 4K and 1MB pages in z/OS storage]
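To see why larger frames help, consider how much per-frame bookkeeping z/OS must do for a big pool. A hedged back-of-envelope sketch; the 20GB pool size is an invented example, not a figure from the slides:

```python
# z/OS keeps bookkeeping per real storage frame, so a buffer pool backed
# by 1MB frames needs 256x fewer frames to manage than the same pool
# backed by 4K frames. Pool size below is purely illustrative.
def frames_needed(pool_bytes, frame_bytes):
    return -(-pool_bytes // frame_bytes)  # ceiling division

pool = 20 * 1024**3                            # hypothetical 20GB buffer pool
frames_4k = frames_needed(pool, 4 * 1024)      # 5,242,880 frames to manage
frames_1mb = frames_needed(pool, 1024 * 1024)  # 20,480 frames to manage
print(frames_4k // frames_1mb)                 # 256
```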
17. Storage Enhancements
• Remains critical to ensure that there is no paging in DB2 address spaces
  - Plan on an additional 10-30% real memory following migration
• Focus changes from virtual memory constraints & monitoring to real memory constraints & monitoring
  - See APAR PM24723 for real storage monitoring and contraction enhancements – advised not to go into production without this!
• Ensure use of PGFIX=YES to exploit 1MB real storage frames
  - Many customers still haven't exploited this feature in their DB2 8 and DB2 9 systems – significant CPU savings!
  - Support for 1MB non page-fixed bufferpools in a future release
• Ensure you are up to date on z/OS maintenance before using 1MB pages
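The 10-30% planning figure can be turned into a quick sizing helper. A sketch only; the 64GB starting point is hypothetical and the function is ours, not an IBM tool:

```python
# Apply the slide's "plan on an additional 10-30% real memory" guidance
# to a current real-storage figure in GB. Purely illustrative.
def post_migration_real_storage(current_gb):
    return current_gb * 110 / 100, current_gb * 130 / 100

# A subsystem using 64GB of real storage today should plan for roughly:
print(post_migration_real_storage(64))  # (70.4, 83.2)
```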
18. Latch contention reduction
• Latch: DB2 mechanism for controlling concurrent events or the use of system resources
  - Reported in accounting and statistics traces
  - Latch wait time can be significant for high-volume environments
• DB2 10 reduces latch contention for a large number of situations, including:
  - LC12: Global Transaction ID serialization
  - LC14: Buffer Manager serialization
  - LC19: Log write in both data sharing and non data sharing
  - LC24: EDM thread storage serialization
  - LC24: Buffer Manager serialization
  - LC27: WLM serialization latch for stored procedures and UDFs
  - LC32: Storage Manager serialization
  - IRLM: IRLM hash contention
  - CML: z/OS Cross Memory Local suspend lock
  - UTSERIAL: Utility serialization lock for SYSLGRNG (removed in NFM)
19. Catalog Concurrency
• Contention on the DB2 catalog is a major ongoing pain for most large DB2 customers
• DB2 10 introduces UTS PBG format for catalog tablespaces in NFM
  - Internal hashes and links are removed during ENFM processing
  - Use of row-level locking and reordered row format
  - Use of new currently committed semantics and other lock avoidance techniques
  - No changes to utility jobs are necessary, but some SMS pre-reqs for migration
• Greatly improves access to catalog/directory
  - REORG SHRLEVEL(CHANGE) for complete catalog/directory
  - BIND concurrency much improved, but more work required in future releases – especially with heavy parallel DDL against different databases
20. Catalog Contention Issues
• Be prepared for some short-term degradation on entry to CM for single-thread BIND/REBIND processes, until you get to NFM
  - PLANMGMT=EXTENDED is the default, so multiple copies of the access plan are kept in the catalog
  - New indexes are defined, in preparation for hash links to be removed in NFM
  - No concurrency improvement until the catalog restructure in ENFM
  - Redbook testing showed worst-case elapsed time increases of 100-200% and class 2 CPU increases of 50-70%
21. SMF compression
• High transaction volume usually means high SMF volume, which can become a limiting factor
• Some customers are forced to switch off useful accounting data, or resort to SMF rollup (via the ACCUMACC ZPARM)
• New SMF compression feature can provide increased throughput due to I/O efficiency improvements
  - Uses the z/OS compression service to deliver approx. 60-90% compression for around 1% CPU cost
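Those ratios are plausible because SMF accounting records are highly repetitive: fixed headers and long runs of zeroed counter sections. The sketch below demonstrates the principle with Python's zlib on a synthetic, mostly-empty record; note that z/OS uses its own compression service, not zlib, and the record layout here is invented:

```python
import zlib

# Synthetic stand-in for an SMF accounting record: a small eyecatcher
# surrounded by zeroed counter sections. Purely illustrative.
record = b"\x00" * 200 + b"DB2ACCT!" + b"\x00" * 300
raw = record * 100                  # a buffer of 100 similar records
packed = zlib.compress(raw, 1)      # cheapest setting, in the spirit of ~1% CPU

saving = 1 - len(packed) / len(raw)
print(f"compression saving: {saving:.0%}")  # well above 90% on this synthetic data
```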
22. SMF compression
• Enabled via the new SMFCOMP DSNZPARM (member scope)
• All data after the SMF header is compressed
• Needs vendor support to allow compressed SMF records to be processed
• New sample DSNTSMFD application to decompress SMF data (via PM27872)
• Can be used in conjunction with accounting rollup to achieve up to a 99% reduction
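A back-of-envelope check on that combined figure: rollup divides the record count, and compression then shrinks each remaining record. The 10:1 rollup and 90% compression inputs below are illustrative assumptions, not quoted measurements:

```python
# Combined SMF volume reduction from accounting rollup (ACCUMACC) plus
# SMF compression. Inputs are illustrative, not measured figures.
def smf_volume_reduction(rollup_factor, compression_saving):
    remaining = (1 / rollup_factor) * (1 - compression_saving)
    return 1 - remaining

# 10:1 rollup combined with 90% compression leaves 1% of the original volume:
print(f"{smf_volume_reduction(10, 0.90):.0%}")  # 99%
```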
23. Other scalability enhancements
• Other enhancements in DB2 10
  - SPT01 restructured: split into several pieces, with LOBs used for larger package sections
  - Workfile enhancements: support for spanned records to increase maximum record length, better use of in-memory workfiles, use of PBG tablespaces
  - Support for Extended Address Volumes: EAVs theoretically allow up to 221TB per volume (223GB in z/OS 1.10)
  - Decreased dataset allocation/deallocation times: using new function in z/OS 1.12, DB2 startup/shutdown times can be improved (can be retrofitted to V8 and DB2 9 via APAR)
25. Summary
• DB2 10 delivers some very significant enhancements for increasing throughput, supporting more users and reducing planned downtime
  - Many of these enhancements are available in Conversion Mode (CM)
• Remember that sufficient real storage is needed to back any increase in virtual storage
• If you are still on DB2 V8, remember that support ends in April 2012
26. Further Reading
• IBM DB2 10 Home Page
  http://www-01.ibm.com/software/data/db2/zos/db2-10/
• White Paper – DB2 10: A Smarter Database for a Smarter Planet
  https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=sw-infomgt&S_PKG=wp-z-db2-smarter
  Also available as part of a "flashbook" - ISBN: 1583473610
• DB2 10 for z/OS Performance Topics Redbook (SG24-7942) just out
  http://www.redbooks.ibm.com/abstracts/sg247942.html?Open
• IDUG – International DB2 User Group
  http://www.idug.org/
27. Feedback / Questions
Jeff Josten - josten@us.ibm.com
Julian Stuhler – julian.stuhler@triton.co.uk