MQ version 8 introduces two new major features: 64-bit buffer pools and 8-byte log RBAs. 64-bit buffer pools allow buffer pools to utilize up to 16 exabytes of storage above the bar, increasing maximum buffer pool size and number. 8-byte log RBAs increase the log RBA range from 6 bytes to 8 bytes, expanding the maximum log size before recycling.
IBM Impact 2014 AMC-1876: IBM WebSphere MQ for z/OS: Latest Features Deep Dive – Paul Dennis
1. Ensure all queue managers in the queue sharing group are upgraded to WebSphere MQ v8.0 and running in NEWFUNC mode.
2. Stop the queue manager you want to migrate.
3. Run the new CSQJUCNV utility to convert the BSDS from 6 byte to 8 byte format while keeping the original BSDS intact.
4. Restart the queue manager using the new 8 byte format BSDS, which will now support log RBAs with 8 byte length.
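The motivation for the conversion in step 3 can be seen with simple arithmetic: a 6-byte RBA addresses 2^48 bytes of log data before wrapping, while an 8-byte RBA addresses the full 2^64-byte range. A small sketch of the two ranges:

```java
public class RbaRange {
    public static void main(String[] args) {
        // A 6-byte (48-bit) RBA can address 2^48 bytes of log data.
        long sixByteRange = 1L << 48;
        long sixByteTb = sixByteRange / (1L << 40);   // expressed in terabytes
        // An 8-byte (64-bit) RBA addresses 2^64 bytes; 2^64 overflows a signed
        // long, so compute the range in terabytes instead: 2^(64-40) TB = 16 EB.
        long eightByteTb = 1L << (64 - 40);
        System.out.println("6-byte RBA log range: " + sixByteTb + " TB");
        System.out.println("8-byte RBA log range: " + eightByteTb + " TB (16 EB)");
    }
}
```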
The document discusses new features in IBM MQ for z/OS Version 8 including 64-bit buffer pools that allow for larger pool sizes and more buffers to improve performance, an increase from 6 to 8 bytes for the log relative byte address to support higher throughput and prevent queue managers from terminating early due to log space issues, and enhancements to channel initiator statistics and accounting data.
HBase HUG Presentation: Avoiding Full GCs with MemStore-Local Allocation BuffersCloudera, Inc.
Todd Lipcon presents a solution to avoid full garbage collections (GCs) in HBase by using MemStore-Local Allocation Buffers (MSLABs). The document outlines that write operations in HBase can cause fragmentation in the old generation heap, leading to long GC pauses. MSLABs address this by allocating each MemStore's data into contiguous 2MB chunks, eliminating fragmentation. When MemStores flush, the freed chunks are large and contiguous. With MSLABs enabled, the author saw basically zero full GCs during load testing. MSLABs improve performance and stability by preventing GC pauses caused by fragmentation.
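The core of the MSLAB idea can be illustrated with a toy allocator (a simplified sketch, not HBase's actual implementation): cell bytes are copied into private 2 MB chunks with a bump pointer, so the heap fills with a few large arrays rather than millions of scattered small objects.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of an MSLAB-style allocator: each MemStore copies incoming
// cell bytes into contiguous 2 MB chunks, avoiding old-gen fragmentation.
public class MslabSketch {
    static final int CHUNK_SIZE = 2 * 1024 * 1024;   // 2 MB chunks, as in HBase
    private final List<byte[]> chunks = new ArrayList<>();
    private byte[] current;
    private int offset;

    /** Copies data into the current chunk, starting a new chunk when full. */
    public int allocate(byte[] data) {
        if (current == null || offset + data.length > CHUNK_SIZE) {
            current = new byte[CHUNK_SIZE];          // one large, contiguous allocation
            chunks.add(current);
            offset = 0;
        }
        System.arraycopy(data, 0, current, offset, data.length);
        int start = offset;
        offset += data.length;
        return start;                                 // offset of the copy in its chunk
    }

    public int chunkCount() { return chunks.size(); }

    public static void main(String[] args) {
        MslabSketch mslab = new MslabSketch();
        // 100,000 small cells land in a handful of big chunks,
        // not 100,000 separately garbage-collected objects.
        for (int i = 0; i < 100_000; i++) {
            mslab.allocate(new byte[100]);
        }
        System.out.println("chunks used: " + mslab.chunkCount());
    }
}
```

When a MemStore flushes, dropping its chunk list frees a few large contiguous blocks, which is exactly the shape the old generation can reuse without fragmenting.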
This document summarizes a presentation about optimizing HBase performance through caching. It discusses how baseline tests showed low cache hit rates and CPU/memory utilization. Reducing the table block size improved cache hits but increased overhead. Adding an off-heap bucket cache to store table data minimized JVM garbage collection latency spikes and improved memory utilization by caching frequently accessed data outside the Java heap. Configuration parameters for the bucket cache are also outlined.
Accelerating HBase with NVMe and Bucket Cache – Nicolas Poggi
The Non-Volatile Memory Express (NVMe) standard promises an order of magnitude faster storage than regular SSDs, while at the same time being more economical than regular RAM in TB/$. This talk evaluates the use cases and benefits of NVMe drives for use in Big Data clusters with HBase and Hadoop HDFS.
First, we benchmark the different drives using system level tools (FIO) to get maximum expected values for each different device type and set expectations. Second, we explore the different options and use cases of HBase storage and benchmark the different setups. And finally, we evaluate the speedups obtained by the NVMe technology for the different Big Data use cases from the YCSB benchmark.
In summary, while the NVMe drives show up to 8x speedup in best case scenarios, testing the cost-efficiency of new device technologies is not straightforward in Big Data, where we need to overcome system level caching to measure the maximum benefits.
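As an illustrative example of the system-level baseline in the first step (the exact flags and device path are assumptions, not taken from the talk), a raw random-read measurement with FIO might look like:

```shell
# Hypothetical FIO baseline: 4 KB random reads straight to the NVMe device,
# with --direct=1 bypassing the page cache so system-level caching
# does not inflate the measured throughput.
fio --name=nvme-randread --filename=/dev/nvme0n1 --direct=1 \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```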
HBase Accelerated introduces an in-memory flush and compaction pipeline for HBase to improve performance of real-time workloads. By keeping data in memory longer and avoiding frequent disk flushes and compactions, it reduces I/O and improves read and scan latencies. Evaluation on workloads with high update rates and small working sets showed the new approach significantly outperformed the default HBase implementation by serving most data from memory. Work is ongoing to further optimize the in-memory representation and memory usage.
This document provides an overview of HBase architecture and advanced usage topics. It discusses course credit requirements, HBase architecture components like storage, write path, read path, files, region splits and more. It also covers advanced topics like secondary indexes, search integration, transactions and bloom filters. The document emphasizes that HBase uses log-structured merge trees for efficient data handling and operates at the disk transfer level rather than disk seek level for performance. It also provides details on various classes involved in write-ahead logging.
The document discusses different types of block caches in HBase including LruBlockCache, SlabCache, and BucketCache. It explains that block caching improves performance by storing frequently accessed blocks in faster memory rather than slower disk storage. Each block cache has its own configuration options and memory usage characteristics. Benchmark results show that the off-heap BucketCache provides strong performance due to its use of off-heap memory for the L2 cache.
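As an illustrative sketch (the values are assumptions, not figures from the benchmarks), an off-heap L2 BucketCache is typically enabled through a pair of hbase-site.xml properties, with the JVM's direct memory limit (-XX:MaxDirectMemorySize) raised to match:

```xml
<!-- hbase-site.xml: enable an off-heap L2 BucketCache (illustrative values) -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>   <!-- or file:/mnt/ssd/bucketcache for a file-backed cache -->
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>8192</value>      <!-- cache size in MB -->
</property>
```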
The document provides an introduction and agenda for an HBase presentation. It begins with an overview of HBase and discusses why relational databases are not scalable for big data through examples of a growing website. It then introduces concepts of HBase including its column-oriented design and architecture. The document concludes with hands-on examples of installing HBase and performing basic operations through the HBase shell.
HBase 2.0 is the next stable major release for Apache HBase, scheduled for early 2017. It is the biggest and most exciting milestone release from the Apache community since 1.0. HBase 2.0 contains a large number of features that have been a long time in development, including rewritten region assignment, performance improvements (RPC, a rewritten write pipeline, etc.), async clients, a C++ client, off-heaping of the memstore and other buffers, Spark integration, and shading of dependencies, as well as many other fixes and stability improvements. We will go into technical detail on some of the most important improvements in the release, as well as the implications for users in terms of APIs and upgrade paths. Existing users of HBase/Phoenix, as well as operators managing HBase clusters, will benefit the most, as they can learn about the new release and its long list of features. We will also briefly cover the earlier 1.x release lines, compatibility and upgrade paths for existing users, and conclude with an outlook on the next initiatives for the project.
Now that you've seen HBase 1.0, what's ahead in HBase 2.0, and beyond, and why? Find out from this panel of people who have designed and/or are working on 2.0 features.
This document discusses tuning HBase and HDFS for performance and correctness. Some key recommendations include:
- Enable HDFS sync on close and sync behind writes for correctness on power failures.
- Tune HBase compaction settings like blockingStoreFiles and compactionThreshold based on whether the workload is read-heavy or write-heavy.
- Size RegionServer machines based on disk size, heap size, and number of cores to optimize for the workload.
- Set client and server RPC chunk sizes like hbase.client.write.buffer to 2MB to maximize network throughput.
- Configure various garbage collection settings in HBase such as -Xmn512m and -XX:+UseCMSInitiatingOccupancyOnly.
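A hedged sketch of how some of the named settings might appear in hbase-site.xml (the values here are illustrative, not recommendations from the talk):

```xml
<!-- hbase-site.xml: illustrative values for the settings named above -->
<property>
  <name>hbase.client.write.buffer</name>
  <value>2097152</value>   <!-- 2 MB client write buffer -->
</property>
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>16</value>        <!-- block writes when a store exceeds this many files -->
</property>
<property>
  <name>hbase.hstore.compactionThreshold</name>
  <value>3</value>         <!-- minor compaction once a store has this many files -->
</property>
```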
Rigorous and Multi-tenant HBase Performance – Cloudera, Inc.
The document discusses techniques for rigorously measuring Apache HBase performance in both standalone and multi-tenant environments. It introduces the Yahoo! Cloud Serving Benchmark (YCSB) and best practices for cluster setup, workload generation, data loading, and measurement. These include pre-splitting tables, warming caches, setting target throughput, and using appropriate workload distributions. The document also covers challenges in achieving good multi-tenant performance across HBase, MapReduce and Apache Solr.
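A hedged sketch of the YCSB workflow described above (the binding name, workload file, and counts are illustrative, and assume the table was pre-split and caches warmed beforehand):

```shell
# 1. Load phase: populate the dataset.
bin/ycsb load hbase10 -P workloads/workloada \
    -p table=usertable -p columnfamily=family -p recordcount=100000000

# 2. Run phase: measure at a fixed target throughput rather than flat out,
#    so latency is reported at a realistic operating point.
bin/ycsb run hbase10 -P workloads/workloada \
    -p operationcount=10000000 -target 20000 -threads 64
```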
The tech talk was given by Ranjeeth Kathiresan, Salesforce Senior Software Engineer, and Gurpreet Multani, Salesforce Principal Software Engineer, in June 2017.
The document discusses techniques for improving performance on IBM mainframes by using large pages (1MB pages) and giant pages (2GB pages). It explains that traditional 4KB pages cause performance issues due to frequent translation lookaside buffer (TLB) misses during address translation. Using larger pages reduces the number of pages and TLB entries needed, improving performance. It provides details on how large and giant pages are configured and used in IBM mainframes running z/OS.
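Since each TLB entry maps one page, the benefit of larger pages falls out of simple arithmetic: fewer pages to cover the same address range means fewer TLB entries and fewer misses. A small sketch for a 4 GB range:

```java
public class PageMath {
    public static void main(String[] args) {
        long region = 4L * 1024 * 1024 * 1024;          // a 4 GB address range
        long small = region / (4L * 1024);              // traditional 4 KB pages
        long large = region / (1024L * 1024);           // 1 MB large pages
        long giant = region / (2048L * 1024 * 1024);    // 2 GB giant pages
        // Each page needs its own TLB entry when actively referenced.
        System.out.println("4KB pages: " + small);
        System.out.println("1MB pages: " + large);
        System.out.println("2GB pages: " + giant);
    }
}
```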
At StampedeCon 2012 in St. Louis, Pritam Damania presents: Reliable backup and recovery is one of the main requirements for any enterprise grade application. HBase has been very well embraced by enterprises needing random, real-time read/write access with huge volumes of data and ease of scalability. As such, they are looking for backup solutions that are reliable, easy to use, and can co-exist with existing infrastructure. HBase comes with several backup options but there is a clear need to improve the native export mechanisms. This talk will cover various options that are available out of the box, their drawbacks and what various companies are doing to make backup and recovery efficient. In particular it will cover what Facebook has done to improve performance of backup and recovery process with minimal impact to production cluster.
Twitter uses HBase for mutable batch processing data storage and operational intelligence. They run HBase 0.94 on Hadoop 2.0 across multiple clusters managed by Puppet. HBase stores operational audit logs and supports Python apps. hRaven stores and analyzes MapReduce job metrics to optimize Pig reducers, plan cluster capacity, and troubleshoot problems. It tracks 12.6M jobs across flows, clusters, users and versions.
The document summarizes the HBase 1.0 release which introduces major new features and interfaces including a new client API, region replicas for high availability, online configuration changes, and semantic versioning. It describes goals of laying a stable foundation, stabilizing clusters and clients, and making versioning explicit. Compatibility with earlier versions is discussed and the new interfaces like ConnectionFactory, Connection, Table and BufferedMutator are introduced along with examples of using them.
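The real hbase-client API needs a running cluster, so as a stand-in, here is a toy model (explicitly not the real BufferedMutator class) of the buffering behavior it provides: mutations accumulate client-side and are sent as a batch once the buffered size crosses a threshold, amortizing RPC round trips.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of BufferedMutator-style batching (not the hbase-client API).
public class BufferedMutatorSketch {
    private final long writeBufferSize;
    private final List<byte[]> buffer = new ArrayList<>();
    private long buffered;
    private int flushes;

    BufferedMutatorSketch(long writeBufferSize) { this.writeBufferSize = writeBufferSize; }

    void mutate(byte[] row) {
        buffer.add(row);
        buffered += row.length;
        if (buffered >= writeBufferSize) flush();   // auto-flush on threshold
    }

    void flush() {
        if (buffer.isEmpty()) return;
        flushes++;                // in the real client, the batched RPC goes here
        buffer.clear();
        buffered = 0;
    }

    int flushCount() { return flushes; }

    public static void main(String[] args) {
        BufferedMutatorSketch m = new BufferedMutatorSketch(2 * 1024 * 1024); // 2 MB buffer
        for (int i = 0; i < 10_000; i++) m.mutate(new byte[1024]);  // 10,000 x 1 KB puts
        m.flush();                                                  // flush the remainder
        System.out.println("batched RPCs: " + m.flushCount());
    }
}
```

The point of the sketch is the shape of the API contract: callers keep issuing small mutations and the client decides when a network round trip is worthwhile.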
This document discusses potential future changes to the topology and architecture of HBase clusters. It describes drivers for changes like supporting clusters with over 1 million regions and improved high availability. Specific proposals discussed include co-locating the HMaster and metadata, splitting or not splitting the metadata region, compacting the in-memory metadata, removing dependencies on ZooKeeper, implementing multiple active masters, and partitioning the master responsibilities. The talk encourages joining online discussions to provide input on these proposals.
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD – inside-BigData.com
In this deck from the 2018 OpenFabrics Workshop, Xiaoyi Lu from OSU presents: High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD.
"The convergence of Big Data and HPC has been pushing the innovation of accelerating Big Data analytics and management on modern HPC clusters. Recent studies have shown that the performance of Apache Hadoop, Spark, and Memcached can be significantly improved by leveraging the high-performance networking technologies, such as Remote Direct Memory Access (RDMA). Most of these studies are based on 'DRAM+RDMA' schemes. On the other hand, Non-Volatile Memory (NVM) and NVMe-SSD technologies can support RDMA access with low-latency, high-throughput, and persistence on HPC clusters. NVMs and NVMe-SSDs provide the opportunity to build novel high-performance and QoS-aware communication and I/O subsystems for data-intensive applications. In this talk, we propose new communication and I/O schemes for these data analytics stacks, which are designed with RDMA over NVM and NVMe-SSD. Our studies show that the proposed designs can significantly improve the communication, I/O, and application performance for Big Data analytics and management middleware, such as Hadoop, Spark, Memcached, etc. In addition, we will also discuss how to design QoS-aware schemes in these frameworks with NVMe-SSD."
Watch the video: https://wp.me/p3RLHQ-iyB
Learn more: http://web.cse.ohio-state.edu/~lu.932/
and
https://www.openfabrics.org/index.php/2018-ofa-workshop-presentations.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Hadoop Summit 2012 | HBase Consistency and Performance Improvements – Cloudera, Inc.
The latest Apache HBase releases, 0.92 and 0.94, contain many improvements over prior releases in terms of correctness and performance improvements. We discuss a couple of these improvements from a development and operations perspective. For correctness, we discuss the ACID guarantees of HBase, give a case study of problems with earlier releases, and give an overview of the implementation internals that were improved to fix the issues. For performance, we discuss recent improvements in 0.94 and how to monitor the performance of a cluster with new metrics.
HBase and HDFS: Understanding FileSystem Usage in HBase – enissoz
This document discusses file system usage in HBase. It provides an overview of the three main file types in HBase: write-ahead logs (WALs), data files, and reference files. It describes durability semantics, IO fencing techniques for region server recovery, and how HBase leverages data locality through short circuit reads, checksums, and block placement hints. The document is intended to help readers understand HBase's interactions with HDFS when tuning IO performance.
This document provides an overview of NVM compression, a hybrid flash-aware application level compression solution. It discusses the drawbacks of existing row-level compression in MySQL and outlines an architecture for NVM compression that avoids these drawbacks. Key aspects of the NVM compression approach include performing compression only during flush, using sparse addressing to avoid over-provisioning flash space, and adding a new multi-threaded flush framework. Evaluation results and building blocks of the solution are also briefly mentioned.
IBM WebSphere MQ for z/OS V8 - Latest Features Deep Dive – Damon Cross
WebSphere MQ for z/OS V8 makes use of many system features and facilities to provide a very high level of availability and performance for your messages. Come along to this session to learn the detail behind all the new features and enhancements in the latest release of WebSphere MQ for z/OS.
Once the ‘Backup Database’ command is executed, SQL Server automatically performs a few checkpoints to reduce recovery time and to ensure that, at the point of execution, there are no dirty pages in the buffer pool. SQL Server then creates at least three workers, ‘Controller’, ‘Stream Reader’, and ‘Stream Writer’, to read and buffer the data asynchronously into a buffer area (outside the buffer pool) and to write those buffers to the backup device.
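The worker arrangement described above is a classic producer-consumer pipeline: the reader fills buffers while the writer drains them, so disk reads and device writes overlap. A hypothetical sketch of that shape (not SQL Server internals):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical reader/writer pipeline: one thread stages "pages" into a small
// buffer area while another drains them to the backup "device" concurrently.
public class BackupPipeline {
    static long run(int pages) throws InterruptedException {
        BlockingQueue<int[]> buffers = new ArrayBlockingQueue<>(4); // small staging area
        int[] poison = new int[0];                                  // end-of-stream marker
        long[] written = {0};

        Thread streamReader = new Thread(() -> {       // reads data pages
            try {
                for (int i = 0; i < pages; i++) buffers.put(new int[]{i});
                buffers.put(poison);
            } catch (InterruptedException ignored) { }
        });
        Thread streamWriter = new Thread(() -> {       // writes to the backup device
            try {
                for (int[] b = buffers.take(); b != poison; b = buffers.take()) {
                    written[0]++;                      // stand-in for a device write
                }
            } catch (InterruptedException ignored) { }
        });

        streamReader.start();
        streamWriter.start();
        streamReader.join();
        streamWriter.join();
        return written[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pages written: " + run(1000));
    }
}
```

The bounded queue plays the role of the buffer area outside the buffer pool: it limits memory use while still letting the two sides run asynchronously.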
WebSphere Portal Version 6.0 Web Content Management and DB2 Tuning Guide – Tan Nguyen Phi
This document provides tuning guidelines for IBM WebSphere Portal Version 6.0 with Web Content Management (WCM) running on DB2. It describes how to optimize the application server, database, and various configuration parameters. Specific recommendations are given for JVM settings like heap size, enabling large pages, and pinning clusters. Database tuning tips include setting registry variables and configuration parameters. Ongoing maintenance activities like monitoring, reorganizations, and statistics collection are also outlined.
This document summarizes a presentation on HPC storage and IO trends and workflows. It discusses how economic modeling has led to the proliferation of multiple storage layers, including burst buffers, parallel file systems, campaign storage, and archives. Workflows are presented as a way to help procurement by specifying application needs rather than specific hardware requirements. The use of workflows in economic modeling and procurement could help guide future machine architectures and integration of storage tiers.
This document summarizes a presentation about Ceph, an open-source distributed storage system. It discusses Ceph's introduction and components, benchmarks Ceph's block and object storage performance on Intel architecture, and describes optimizations like cache tiering and erasure coding. It also outlines Intel's product portfolio in supporting Ceph through optimized CPUs, flash storage, networking, server boards, software libraries, and contributions to the open source Ceph community.
Ceph: Open Source Storage Software Optimizations on Intel® Architecture for C... – Odinot Stanislas
After a short introduction to distributed storage and a description of Ceph, Jian Zhang presents some interesting benchmarks in this talk: sequential tests, random tests, and above all a comparison of results before and after optimizations. The configuration parameters touched and the optimizations applied (large page numbers, OMAP data on a separate disk, ...) deliver at least a 2x performance gain.
The document provides an introduction and agenda for an HBase presentation. It begins with an overview of HBase and discusses why relational databases are not scalable for big data through examples of a growing website. It then introduces concepts of HBase including its column-oriented design and architecture. The document concludes with hands-on examples of installing HBase and performing basic operations through the HBase shell.
HBase 2.0 is the next stable major release for Apache HBase scheduled for early 2017. It is the biggest and most exciting milestone release from the Apache community after 1.0. HBase-2.0 contains a large number of features that is long time in the development, some of which include rewritten region assignment, perf improvements (RPC, rewritten write pipeline, etc), async clients, C++ client, offheaping memstore and other buffers, Spark integration, shading of dependencies as well as a lot of other fixes and stability improvements. We will go into technical details on some of the most important improvements in the release, as well as what are the implications for the users in terms of API and upgrade paths. Existing users of HBase/Phoenix as well as operators managing HBase clusters will benefit the most where they can learn about the new release and the long list of features. We will also briefly cover earlier 1.x release lines and compatibility and upgrade paths for existing users and conclude by giving an outlook on the next level of initiatives for the project.
Now that you've seen Base 1.0, what's ahead in HBase 2.0, and beyond—and why? Find out from this panel of people who have designed and/or are working on 2.0 features.
This document discusses tuning HBase and HDFS for performance and correctness. Some key recommendations include:
- Enable HDFS sync on close and sync behind writes for correctness on power failures.
- Tune HBase compaction settings like blockingStoreFiles and compactionThreshold based on whether the workload is read-heavy or write-heavy.
- Size RegionServer machines based on disk size, heap size, and number of cores to optimize for the workload.
- Set client and server RPC chunk sizes like hbase.client.write.buffer to 2MB to maximize network throughput.
- Configure various garbage collection settings in HBase like -Xmn512m and -XX:+UseCMSInit
Rigorous and Multi-tenant HBase PerformanceCloudera, Inc.
The document discusses techniques for rigorously measuring Apache HBase performance in both standalone and multi-tenant environments. It introduces the Yahoo! Cloud Serving Benchmark (YCSB) and best practices for cluster setup, workload generation, data loading, and measurement. These include pre-splitting tables, warming caches, setting target throughput, and using appropriate workload distributions. The document also covers challenges in achieving good multi-tenant performance across HBase, MapReduce and Apache Solr.
The tech talk was gieven by Ranjeeth Kathiresan, Salesforce Senior Software Engineer & Gurpreet Multani, Salesforce Principal Software Engineer in June 2017.
The document discusses techniques for improving performance on IBM mainframes by using large pages (1MB pages) and giant pages (2GB pages). It explains that traditional 4KB pages cause performance issues due to frequent translation lookaside buffer (TLB) misses during address translation. Using larger pages reduces the number of pages and TLB entries needed, improving performance. It provides details on how large and giant pages are configured and used in IBM mainframes running z/OS.
At StampedeCon 2012 in St. Louis, Pritam Damania presents: Reliable backup and recovery is one of the main requirements for any enterprise grade application. HBase has been very well embraced by enterprises needing random, real-time read/write access with huge volumes of data and ease of scalability. As such, they are looking for backup solutions that are reliable, easy to use, and can co-exist with existing infrastructure. HBase comes with several backup options but there is a clear need to improve the native export mechanisms. This talk will cover various options that are available out of the box, their drawbacks and what various companies are doing to make backup and recovery efficient. In particular it will cover what Facebook has done to improve performance of backup and recovery process with minimal impact to production cluster.
Twitter uses HBase for mutable batch processing data storage and operational intelligence. They run HBase 0.94 on Hadoop 2.0 across multiple clusters managed by Puppet. HBase stores operational audit logs and supports Python apps. hRaven stores and analyzes MapReduce job metrics to optimize Pig reducers, plan cluster capacity, and troubleshoot problems. It tracks 12.6M jobs across flows, clusters, users and versions.
The document summarizes the HBase 1.0 release which introduces major new features and interfaces including a new client API, region replicas for high availability, online configuration changes, and semantic versioning. It describes goals of laying a stable foundation, stabilizing clusters and clients, and making versioning explicit. Compatibility with earlier versions is discussed and the new interfaces like ConnectionFactory, Connection, Table and BufferedMutator are introduced along with examples of using them.
This document discusses potential future changes to the topology and architecture of HBase clusters. It describes drivers for changes like supporting clusters with over 1 million regions and improved high availability. Specific proposals discussed include co-locating the HMaster and metadata, splitting or not splitting the metadata region, compacting the in-memory metadata, removing dependencies on ZooKeeper, implementing multiple active masters, and partitioning the master responsibilities. The talk encourages joining online discussions to provide input on these proposals.
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSDinside-BigData.com
In this deck from the 2018 OpenFabrics Workshop, Xiaoyi Lu from OSU presents: High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD.
"The convergence of Big Data and HPC has been pushing the innovation of accelerating Big Data analytics and management on modern HPC clusters. Recent studies have shown that the performance of Apache Hadoop, Spark, and Memcached can be significantly improved by leveraging the high-performance networking technologies, such as Remote Direct Memory Access (RDMA). Most of these studies are based on `DRAM+RDMA' schemes. On the other hand, Non-Volatile Memory (NVM) and NVMe-SSD technologies can support RDMA access with low-latency, high-throughput, and persistence on HPC clusters. NVMs and NVMe-SSDs provide the opportunity to build novel high-performance and QoS-aware communication and I/O subsystems for data-intensive applications. In this talk, we propose new communication and I/O schemes for these data analytics stacks, which are designed with RDMA over NVM and NVMe-SSD. Our studies show that the proposed designs can significantly improve the communication, I/O, and application performance for Big Data analytics and management middleware, such as Hadoop, Spark, Memcached, etc. In addition, we will also discuss how to design QoS-aware schemes in these frameworks with NVMe-SSD."
Watch the video: https://wp.me/p3RLHQ-iyB
Learn more: http://web.cse.ohio-state.edu/~lu.932/
and
https://www.openfabrics.org/index.php/2018-ofa-workshop-presentations.html
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Hadoop Summit 2012 | HBase Consistency and Performance ImprovementsCloudera, Inc.
The latest Apache HBase releases, 0.92 and 0.94, contain many improvements over prior releases in terms of correctness and performance improvements. We discuss a couple of these improvements from a development and operations perspective. For correctness, we discuss the ACID guarantees of HBase, give a case study of problems with earlier releases, and give an overview of the implementation internals that were improved to fix the issues. For performance, we discuss recent improvements in 0.94 and how to monitor the performance of a cluster with new metrics.
HBase and HDFS: Understanding FileSystem Usage in HBaseenissoz
This document discusses file system usage in HBase. It provides an overview of the three main file types in HBase: write-ahead logs (WALs), data files, and reference files. It describes durability semantics, IO fencing techniques for region server recovery, and how HBase leverages data locality through short circuit reads, checksums, and block placement hints. The document is intended help understand HBase's interactions with HDFS for tuning IO performance.
This document provides an overview of NVM compression, a hybrid flash-aware application level compression solution. It discusses the drawbacks of existing row-level compression in MySQL and outlines an architecture for NVM compression that avoids these drawbacks. Key aspects of the NVM compression approach include performing compression only during flush, using sparse addressing to avoid over-provisioning flash space, and adding a new multi-threaded flush framework. Evaluation results and building blocks of the solution are also briefly mentioned.
IBM WebSphere MQ for z/OS V8 - Latest Features Deep DiveDamon Cross
WebSphere MQ for z/OS V8 makes use of many system features and facilities to provide a very
high level of availability and performance for your messages. Come along to this session to learn
the detail behind all the new features and enhancements in the latest release of WebSphere MQ
for z/OS.
2. WebSphere MQ
IBM Software Group | WebSphere software
Agenda
64 Bit Buffer Pools
8 Byte Log RBA
Chinit SMF
SCM Storage (Flash)
Other things…
In this presentation the following topics are going to be covered:
64 Bit Buffer Pools,
8 Byte Log RBA,
Chinit SMF,
SCM Storage, which is sometimes called Flash,
Then we will cover a few other smaller things that are in MQ version 8 on z/OS.
Hopefully there will be time for questions for Matt and me.
3. 64 Bit Buffer Pools
The first large feature in MQ version 8 is 64 bit buffer pools.
4. Buffer Pools: What We Have Today
[Diagram: two queues, Q1 and Q2, stored as 4KB pages on two separate page sets, with both page sets sharing a single buffer pool]
Let's start by looking at what we have today.
This slide shows a representation of the current use of buffer pools with queues and page sets.
A queue is configured to use a specific page set for the storage of messages. One or more page sets can be configured to use a particular buffer pool to “buffer” the messages.
When a message is written to a queue, it is stored as one or more 4 KB pages. The data is initially written to the buffer pool, and may at some later stage be written out to the page set, for example if there are not enough pages in the buffer pool to contain all the messages on the queues. When a message is got, the data is retrieved from the buffer pool; if the necessary pages are not in the buffer pool, they must first be read back in from the page set before the message can be returned.
On this slide, there are two queues containing messages, which are using two different page sets; however, these two page sets are both using the same buffer pool.
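The queue-to-page-set and page-set-to-buffer-pool chain described above is configured with MQSC definitions. A minimal sketch follows; the object names and numbers are hypothetical, and it assumes the usual z/OS arrangement where DEFINE BUFFPOOL and DEFINE PSID are read from the CSQINP1 input data set at queue manager start-up, while queues are mapped to page sets indirectly through a storage class:

```
* Hypothetical example - IDs and names are illustrative only
* CSQINP1: define a buffer pool and associate a page set with it
DEFINE BUFFPOOL(1) BUFFERS(50000)
DEFINE PSID(3) BUFFPOOL(1)
* CSQINP2: map a queue to that page set via a storage class
DEFINE STGCLASS(APPSTG) PSID(3)
DEFINE QLOCAL(Q1) STGCLASS(APPSTG)
```

With these definitions, messages put to Q1 land on page set 3 and are buffered through buffer pool 1, matching the diagram on the slide.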
5. Buffer Pools: What We Have Today
[Diagram: queue manager address space, with code and data below the 2 GB bar, buffer pools constrained to a maximum of about 1.6 GB below the bar, and 16 EB of addressable storage above the bar]
This slide shows a representation of the queue manager address space.
All of the queue manager code resides below the bar, in 31 bit storage.
There is various queue manager data which resides below the bar. In previous releases, some things were moved into 64 bit storage, like locks, security data, the IGQ buffer, etc. Also, when new features were added, they would often exploit 64 bit storage, for example pub/sub.
Prior to MQ version 8 the buffer pools have had to be in 31 bit storage. This means that, when taking into account the code and data requirements already described, there would be a maximum of approximately 1.6 gigabytes of available space for buffer pools (depending on the common storage usage on the system).
6. Buffer Pools: The Problems
Not much space below the bar for buffer pools
Maximum 1.6GB, depending on common area
Put/Get with buffer pool = 'memory' speed
Put/Get from page set = 'disk' speed
Can spend a lot of time moving data around
Getting pages from page set into buffer pool
Putting pages from buffer pool into page set
This is detrimental for performance
A maximum of 16 buffer pools
Although up to 100 page sets are supported
Lots of time spent tuning buffer pool sizes and queue/buffer pool/page set mappings
There is not much space below the bar for buffer pools once queue manager code and data are taken into account. There is generally a maximum of 1.6 gigabytes available for buffer pools, depending on common storage usage.
Putting and getting messages in the buffer pool works at 'memory' speed, whereas putting and getting messages from the page set works at 'disk' speed.
For scenarios where several applications read and/or write large numbers of messages to the same buffer pool, a lot of time is spent getting pages from the page set into the buffer pool and putting pages from the buffer pool into the page set. This is detrimental to performance.
A maximum of 16 buffer pools is supported, while up to 100 page sets are supported, meaning that if you have more than 16 page sets, some page sets must share the same buffer pool.
For these reasons, a lot of time and effort can be spent tuning the buffer pool sizes, and the mapping of queues to page sets and page sets to buffer pools.
7. Buffer Pools: Using 64 bit storage
[Diagram: queue manager address space as before, but with buffer pools now also located above the 2 GB bar in 64 bit storage, no longer constrained to the 1.6 GB below-the-bar limit]
This slide shows a representation of the queue manager address space, similar to the previous diagram, but this time using 64 bit storage for the buffer pools.
Other storage usage has remained the same, with queue manager code, and some of the data, remaining in 31 bit storage. However, being able to support 64 bit storage for buffer pools means that they may be moved out of 31 bit storage, relieving the constraint for other uses of the 31 bit storage. The diagram shows that buffer pools may continue to use 31 bit storage, providing a migration path to using 64 bit storage. The diagram also shows that, because of the greater availability of storage above the bar, the sizes of the buffer pools may be increased, no longer constrained by the 1.6 gigabyte overall storage availability.
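That migration path can be sketched with MQSC; the pool ID and buffer counts below are made up for illustration. In V8 the LOCATION attribute can be altered dynamically, so an existing below-the-bar pool can be moved above the bar and then grown:

```
* Illustrative only - pool ID and buffer counts are hypothetical
* Move buffer pool 2 above the bar; the queue manager relocates it dynamically
ALTER BUFFPOOL(2) LOCATION(ABOVE)
* Once above the bar, the pool can be grown well beyond the old 31 bit limits
ALTER BUFFPOOL(2) BUFFERS(2000000)
```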
8. 64 Bit Buffer Pools: The Solution
Allow buffer pools to be above the bar.
Buffer pools can (theoretically) make use of up to 16 EB of storage
Increase maximum size of buffer pool if above the bar
Allow up to 100 buffer pools
Can have a 1-1 page set to buffer pool mapping
NOTES
MQ version 8 introduces the capability of having buffer pools above the bar. This means that buffer pools
can, theoretically, make use of up to 16 exabytes of storage.
The maximum size of an individual buffer pool has also increased, to 999,999,999 buffers, if the buffer pool is
above the bar. The previous limit was 500,000 buffers, which remains in force if the buffer pool is located
below the bar.
The maximum number of buffer pools has been increased from 16 to 100. This enables a 1 to 1 mapping
of page sets to buffer pools.
9. 17
64 Bit Buffer Pools: What Has Changed?
LOCATION/LOC attribute specifies where the buffer pool is
located
BELOW: The default. Buffer pool is located below the bar in 31 bit storage
ABOVE: Buffer pool is located above the bar in 64 bit storage
This can be altered dynamically
BUFFERS attribute has a valid range of up to 999,999,999
if LOCATION(ABOVE) set
Buffer pool ID can be in range of 0 – 99
PAGECLAS attribute enables permanent backing by real
storage for maximum performance
4KB – The default, uses page-able 4KB buffers
FIXED4KB – Uses permanently page fixed 4KB buffers. Only supported if
LOC(ABOVE) is specified
NOTES
A new attribute called LOCATION, or LOC for short, has been added to the buffer pool definition. This
enables the location, relative to the bar, to be specified. A value of BELOW indicates that the buffer pool
should be below the bar in 31 bit storage; this is the default, and matches what was available in MQ
V7.1 and earlier releases. A value of ABOVE indicates that the buffer pool should be located
above the bar in 64 bit storage. The LOCATION value can be altered dynamically, and the queue manager
will then dynamically move the buffer pool to the new location.
The buffer pool BUFFERS attribute now has an extended valid range, accepting values up to
999,999,999 if LOCATION(ABOVE) has been set.
The buffer pool ID attribute can now have a valid range of 0 to 99.
A new attribute called PAGECLAS has been added to the buffer pool definition. This attribute enables 64
bit buffer pools to be configured to be permanently backed by real storage for maximum performance. The
default value of 4KB means that the buffer pool will be pageable. Using a value of FIXED4KB means that
MQ does not have to page fix and unfix buffers when doing I/O. This can give significant performance
benefits if the buffer pool is under stress, and therefore doing lots of reads from, and writes to, the page set.
There is a WARNING: storage will be page fixed for the life of the queue manager. Ensure you have
sufficient real storage, otherwise other address spaces might be impacted.
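Putting these attributes together, a definition of a page-fixed, above the bar buffer pool might look like the following sketch (the buffer pool ID and size are arbitrary illustrations, not values from the presentation):

```
DEFINE BUFFPOOL(30) LOCATION(ABOVE) BUFFERS(5000000) PAGECLAS(FIXED4KB) REPLACE
```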
10. 19
64 Bit Buffer Pools: Migration
To use this function OPMODE(NEWFUNC,800) must be set
Otherwise behaviour is same as in version 7
Although LOCATION(BELOW) is valid regardless of OPMODE
Some messages have changed regardless of the value of
OPMODE
NOTES
To exploit 64 bit storage for the buffer pools, the queue manager must be running in version 8 NEWFUNC
mode. However, LOCATION(BELOW) can be specified when running in COMPAT mode.
Some console messages have changed, regardless of OPMODE. For example, the DISPLAY USAGE PSID
command now displays the location of the buffer pool.
11. 21
64 Bit Buffer Pools: Configuration
CSQINP1
DEFINE BUFFPOOL(22) LOCATION(ABOVE) BUFFERS(1024000) REPLACE
DEFINE BUFFPOOL(88) BUFFERS(12000) REPLACE
CSQINP1 or dynamically with dsn
DEFINE PSID(22) BUFFPOOL(22)
CSQINP2 or dynamically
ALTER BUFFPOOL(88) LOC(ABOVE)
CSQP024I !MQ21 Request initiated for buffer pool 88
CSQ9022I !MQ21 CSQPALTB ' ALTER BUFFPOOL' NORMAL COMPLETION
CSQP023I !MQ21 Request completed for buffer pool 88, now has 12000 buffers
CSQP054I !MQ21 Buffer pool 88 is now located above the bar
NOTES
This slide shows the enhanced buffer pool commands. Above the bar buffer pools are defined in the same
places as below the bar buffer pools. MQ allows a buffer pool's location to be moved dynamically; this
slide shows the console messages when this is done.
12. 23
64 Bit Buffer Pools: Configuration
CSQINP2 or dynamically
DISPLAY USAGE PSID(*)
CSQI010I !MQ21 Page set usage …
<REMOVED>
End of page set report
CSQI065I !MQ21 Buffer pool attributes ...
Buffer Available Stealable Stealable Page Location
pool buffers buffers percentage class
_ 0 1024 1000 99 4KB BELOW
_ 22 1024000 234561 23 FIXED4KB ABOVE
_ 88 12000 1200 10 4KB ABOVE
End of buffer pool attributes
NOTES
And this is the new output of DISPLAY USAGE PSID. The page class and location can be seen.
13. 25
64 Bit Buffer Pools: Performance Summary
[Tables: transaction cost and throughput for 31 bit vs 64 bit buffer pools, with one and two requesters]
• 16 CP LPAR
• Each transaction puts and gets a random message from a pre loaded queue.
Second test requires a doubling in buffer pool size
NOTES
The previous slide shows two tables comparing the performance of 31 bit buffer pools and 64 bit buffer
pools.
The first table shows the results when running tests using a single requester on a queue. There is a
small increase in transaction cost when using 64 bit buffer pools vs 31 bit buffer pools, with the CPU
microseconds increasing from 35.92 to 37.48. However, when we increase the number of buffers in use
in the 64 bit case, the channel path %busy drops to nearly 0, indicating that we no longer need to
access the page set, and all requests are satisfied from the buffer pool. The transaction rate has also
increased by about 40%.
The second table shows that when using two requesters against the queue, there is a high channel path
%busy rate, of about 75%, for both the 31 bit and 64 bit buffer pool case. However, when extra buffers
are added in the 64 bit case, this channel path busy drops to nearly 0 and the transaction rate more than
doubles. The LPAR busy % also increases from about 43% to very close to 100%, showing that we are
now driving the system as hard as possible.
Being able to provide more buffers by using 64 bit storage means that we can drive the system much
more efficiently for a slight increase in per transaction cost.
14. IBM Software Group WebSphere Software
8 Byte Log RBA
NOTES
The next large feature delivered in MQ version 8 is an increase in the size of the log RBA range.
15. 29
8 byte log RBA: The Problem
MQ for z/OS V7.1 (or earlier):
Implements a 6 byte Log RBA (Relative Byte Address)
This gives an RBA range of 0 to x'FFFFFFFFFFFF' (= 255TB)
Some customers reach this limit in 12 to 18 months
At 100MB/sec, log would be full in 1 month
If we reach the end of the Log RBA range (i.e. “wrap”):
queue manager terminates and requires a cold start – a disruptive outage !
Potential for loss of persistent data
To avoid an outage:
Run CSQUTIL RESETPAGE, at regular planned intervals, to RESET the LOG RBA
NOTES
With MQ V7.1 or earlier, the relative byte address (RBA) is 6 bytes long. MQ uses the RBA to track
log records; when MQ handles persistent data or units of work, which involves writing to the log, the RBA
increases depending on the size of the log record. The length of 6 bytes gives a range from zero to
x'FFFFFFFFFFFF', which is approximately 255 terabytes.
Although this sounds like a large amount of data, with the speeds of modern z Systems and the throughput that
customers are pumping through MQ, some customers have been reaching this limit in between 12 and 18
months. If MQ is writing to the log constantly at 100 MB per second, which is achievable on modern
hardware, it would only take a month to reach the end of the log. At that point the queue manager
terminates and requires a cold start, which also means that persistent data may be lost.
MQ does provide a procedure to avoid a disruptive outage: the RESETPAGE function of CSQUTIL
resets the log RBA. This should be planned at regular intervals, but does require the queue
manager to be stopped while the utility is run.
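As a back-of-envelope check of the figures above (the 255TB range, and a log full in about a month at 100MB/sec), treating MB as MiB:

```python
# 6 byte log RBA: addresses from 0 to x'FFFFFFFFFFFF', i.e. 2**48 bytes
rba_range = 2 ** 48

print(rba_range / 2 ** 40)          # 256.0 TiB (the "255TB" on the slide)

# Writing to the log constantly at 100 MB/sec
seconds_to_fill = rba_range / (100 * 2 ** 20)
print(seconds_to_fill / 86400)      # roughly 31 days, i.e. about a month
```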
16. 31
NOTES
Other notes
With APAR PM48299, WebSphere MQ V7.0.1 (and above) queue managers will issue more messages to
warn of the log RBA range being exhausted.
CSQI045I/CSQI046E/CSQI047E to warn if RBA is high (>= x'700000000000')
– CSQI045I, when RBA is x'700000000000', x'7100..', x'7200..' and x'7300..'
– CSQI046E when RBA is x'740000000000', x'7500..', x'7600..' and x'7700..'
– CSQI047E when RBA is x'780000000000', x'7900..', x'nn00..' and x'FF00..'
CSQJ031D on restart to confirm queue manager start-up even though log RBA has passed
x'FF8000000000‘
Terminates, to prevent loss of data, if log RBA reaches x'FFF800000000'. The LOG RBA will need to be
RESET to allow the queue manager to continue to operate
These RBA trigger levels have been changed in MQ V8 with 6 byte RBA:
– CSQI045I, when RBA is x'F00000000000'
– CSQI046E when RBA is x'F80000000000'
– CSQI047E when RBA is x'FF8000000000'
Different trigger points exists with 8 byte log RBA.
17. 33
8 byte log RBA: The Solution
Upper limit on logical log will be 64K times bigger
At 100MB/sec this will now take about 5578 years to fill!
BSDS and log records changed to handle 8 byte RBAs and
URIDs
Utilities or applications that read the following are impacted:
The BSDS
The Logs
Console messages that contain the log RBA or URID
queue manager will use 6 byte RBA until 8 byte RBA is enabled
BSDS conversion utility (same model as used by DB2) to migrate to 8 byte RBA
NOTES
In MQ version 8 we have extended the log RBA from 6 to 8 bytes, which doesn't sound like a lot, but the log
RBA range is approximately 65,000 times larger. Returning to the logging rate example, it would take about
5,578 years to fill at 100 megabytes per second.
This feature has changed the BSDS and log records to handle the new 8 byte RBAs and URIDs.
Utilities or applications that read the BSDS, the logs, or the console messages that contain the log
RBA or URID are impacted.
When the queue manager is running on MQ version 8, it has been designed to use 6 byte log RBAs until
8 byte log RBA has been enabled. This allows both backwards migration and co-existence with queue
managers at a prior release. The console messages at version 8 show the full 8 bytes regardless of
whether 8 byte log RBA is enabled; a 6 byte log RBA has four zeroes prepended. MQ has provided
a BSDS conversion utility to migrate to 8 bytes. This is a one way migration process, and it's the same
model that DB2 uses.
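The "64K times bigger" and "5578 years" figures can be checked with the same arithmetic (again treating MB as MiB):

```python
six_byte_range = 2 ** 48    # old 6 byte RBA
eight_byte_range = 2 ** 64  # new 8 byte RBA

print(eight_byte_range // six_byte_range)   # 65536, i.e. 64K times bigger

seconds_to_fill = eight_byte_range / (100 * 2 ** 20)   # at 100 MB/sec
years = seconds_to_fill / (365 * 86400)
print(round(years))                          # about 5578 years
```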
18. 35
8 byte log RBA: Migration
Get MQ v8.0 running in NEWFUNC mode (you promise not to
fall back to V7)
In QSG, entire group must be v8.0 NEWFUNC
STOP queue manager
Run the new CSQJUCNV utility
utility will check that entire QSG is at correct level
copies data from old primary / secondary BSDS pair to new pair
old data is NOT changed for fallback
Restart queue manager with new 8 byte format BSDS
queue manager will ONLY start if in NEWFUNC mode
New message CSQJ034I issued indicating 8 byte RBA mode
all subsequent log data in new format
NOTES
So how do you use this log RBA change to practically eliminate the need to reset the RBA?
A queue manager has to be running MQ version 8 in OPMODE NEWFUNC; this tells us that you
promise not to fall back to a prior version. If the queue manager is part of a QSG, the entire group must
be at version 8 NEWFUNC.
First the queue manager has to be stopped cleanly.
Then the new CSQJUCNV utility is run, which does some QSG checks to ensure the levels
are correct. CSQJUCNV then copies the BSDS data from the old primary / secondary BSDS pair to a
new pair. The old data is not changed, in case fallback is needed.
Finally, restart the queue manager with the new 8 byte format BSDSs. The queue manager will ONLY start if in
NEWFUNC mode. A new console message, CSQJ034I, will indicate that the queue manager is running
in 8 byte RBA mode. All subsequent log data will be written in the new format.
19. IBM Software Group WebSphere Software
CHINIT SMF
NOTES
The third feature that I am going to talk about is SMF. In version 8 we have added the recording of SMF
data from the CHINIT address space.
20. 39
Chinit SMF: The Problem
Prior to MQ v8.0 no SMF data for Chinit address space or
channel activity
Many customers have had to create their own ‘monitoring’ jobs
with periodic DISPLAY CHSTATUS
Difficult to manage historical data, perform capacity planning
and investigate performance issues
NOTES
So, what is the problem that this feature is aimed at solving?
Prior to MQ version 8, no SMF data was available for the CHINIT address space or any channel activity.
Many customers have had to create their own 'monitoring' jobs that do a periodic DISPLAY
CHSTATUS. This has its own problems, such as how to manage historical data. It also does not
help with capacity planning and investigating performance issues.
21. 41
Chinit SMF: The Solution
Additional SMF data for CHINIT address space and channel
activity to enable:
Monitoring
Capacity planning
Tuning
Separate controls from queue manager SMF allows 'opt in'
Updated MP1B SupportPac will format the data and introduces
rule based reporting
NOTES
In MQ version 8, the CHINIT has been extended to produce SMF data. This includes data about the
address space and channel activity. This enables MQ administrators to perform:
monitoring,
capacity planning,
and tuning.
There are separate controls from the queue manager SMF. This allows 'opt-in' type controls, depending
on what data is required.
The SupportPac MP1B, which is produced here in Hursley, has been updated to format the data and
introduces rule based reporting.
22. 43
CHINIT SMF: Controls
You can START CHINIT STAT trace by:
You can START CHINIT ACCTG trace by:
You can display trace by:
ALTER and STOP TRACE commands have also been updated
!MQ08 START TRACE(STAT) CLASS(4)
CSQW130I !MQ08 'STAT' TRACE STARTED, ASSIGNED TRACE NUMBER 05
CSQ9022I !MQ08 CSQWVCM1 ' START TRACE' NORMAL COMPLETION
!MQ08 DISPLAY TRACE(*)
CSQW127I !MQ08 CURRENT TRACE ACTIVITY IS -
TNO TYPE CLASS DEST USERID RMID
02 STAT 01 SMF * *
05 STAT 04 SMF * *
06 ACCTG 04 SMF * *
END OF TRACE REPORT
!MQ08 START TRACE(ACCTG) CLASS(4)
CSQW130I !MQ08 'ACCTG' TRACE STARTED, ASSIGNED TRACE NUMBER 06
CSQ9022I !MQ08 CSQWVCM1 ' START TRACE' NORMAL COMPLETION
NOTES
The CHINIT SMF has been defined as trace class 4.
To start the statistics trace for the address space, use START TRACE(STAT) CLASS(4).
To start the channel activity SMF, use START TRACE(ACCTG) CLASS(4).
In the DISPLAY TRACE extract shown on this slide, both the STAT and ACCTG traces for class 4 are active.
The ALTER TRACE and STOP TRACE commands have been updated to support class 4.
23. 45
CHINIT SMF: Controls
SMF data collected on SMF interval of QMGR
Can be same as z/OS SMF interval
Chinit STAT trace allows high level view of activity
Chinit ACCTG trace allows detailed view at channel level
– STATCHL attribute on queue manager to control system wide setting
– STATCHL attribute on channel to control granularity
– STATACLS attribute on the queue manager controls automatically defined
cluster sender channels.
– Data collected is a superset of that collected on distributed with STATCHL
message
NOTES
The SMF data is collected on the same SMF interval as the queue manager. This can be the same as
the z/OS SMF interval.
The STAT trace option allows a high level view of the CHINIT activity.
The ACCTG trace option allows a detailed view at the channel level. A new attribute called STATCHL
has been added to the channel; this controls the granularity at the channel level. There is also a
STATCHL attribute on the queue manager, so a channel can inherit a system wide setting. The queue
manager object also has a STATACLS attribute, which sets the STATCHL value for automatically defined
cluster sender channels.
The amount of data collected is a superset of that collected on the distributed platforms with the
STATCHL message.
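As a sketch, the accounting granularity could be set system wide and then inherited per channel (the channel name here is a hypothetical example):

```
ALTER QMGR STATCHL(HIGH) STATACLS(QMGR)
ALTER CHANNEL(TO.MQ89) CHLTYPE(SDR) STATCHL(QMGR)
```

Note that on z/OS the channel level STATCHL value is effectively treated as on or off.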
24. 47
CHINIT SMF: New record subtypes and DSECTs
SMF 115
Subtype 231 (0xE7=‘X’) for CHINIT control information
e.g. adapter and dispatcher CPU time etc. to help with tuning numbers of tasks configured
DNS resolution times
SMF 116
Subtype 10 for per channel 'accounting' data
Bytes sent, achieved batch size etc.
New DSECTs
CSQDQWSX (QWSX) : Self defining section for subtype 231 which consists of:
CSQDQWHS (QWHS) : Standard header
CSQDQCCT (QCCT) : Definition for CHINIT control information
CSQDQWS5 (QWS5) : Self defining section for subtype 10 which consists of:
CSQDQWHS (QWHS) : Standard header
CSQDQCST (QCST) : Definition for channel status data
NOTES
Two new SMF records have been added.
SMF 115 subtype 231 has the CHINIT control information, like adapter and dispatcher CPU time. This
helps with tuning the number of tasks configured. This record also has the DNS resolution times.
There is one SMF 116 subtype 10 record of accounting data per channel, so this includes bytes
sent and batch information.
There are new DSECTs. CSQDQWSX is a self defining section for subtype 231 records, which
consists of the CSQDQWHS standard header and then the CSQDQCCT DSECT that defines the
CHINIT control information.
CSQDQWS5 is a self defining section for subtype 10 records, which consists of the CSQDQWHS
standard header and then the channel status data in the CSQDQCST DSECT.
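For anyone writing their own post-processor rather than using MP1B, selecting these records from an SMF dump comes down to filtering on record type and subtype. The sketch below is illustrative only: it assumes RDW-prefixed records and the standard SMF header layout (record type at offset 5, subtype at offset 22), and it does not attempt to decode the QWSX/QWS5 sections.

```python
import struct

def smf_records(data: bytes):
    """Yield (record_type, subtype, record) from RDW-prefixed SMF data."""
    off = 0
    while off < len(data):
        # RDW: 2 byte big-endian length (includes the RDW itself), 2 bytes reserved
        (length,) = struct.unpack_from(">H", data, off)
        record = data[off:off + length]
        rtype = record[5]                                  # SMFxRTY
        (subtype,) = struct.unpack_from(">H", record, 22)  # SMFxSTY
        yield rtype, subtype, record
        off += length

def chinit_smf(data: bytes):
    """Keep only CHINIT statistics (115 subtype 231) and accounting (116 subtype 10)."""
    return [(t, s) for t, s, _ in smf_records(data)
            if (t, s) in ((115, 231), (116, 10))]
```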
25. 49
CHINIT Summary
MV45,MQ20,2014/04/08,20:43:57,VRM:800,
From 2014/04/08,20:41:54.984681 to 2014/04/08,20:43:57.237939 duration
122.253258 seconds
Number of current channels..................... 20
Number of active channels ..................... 20
MAXCHL. Max allowed current channels........... 602
ACTCHL. Max allowed active channels............ 602
TCPCHL. Max allowed TCP/IP channels............ 602
LU62CHL. Max allowed LU62 channels............. 602
Storage used by Chinit......................... 22MB
NOTES
The next few slides show output of the CHINIT's SMF data, formatted by SupportPac
MP1B (other formatters are available).
The output is taken from one of our test systems. On this slide, the summary of the CHINIT
data is shown. This CHINIT has 20 active channels running, and the address space is using 22
megabytes of storage.
26. 51
Dispatchers
NOTES
The example data shows that there were five dispatchers. A channel is associated with a dispatcher,
and the work is distributed across all the dispatchers. This example shows that one dispatcher is
processing more requests than the others. This is normal: some channels might stop, so a
dispatcher may be processing fewer channels, and some channels can be busier than others.
When the CHINIT starts, MQ assigns 10 channels to each dispatcher, and once all dispatchers have
the same number of channels, the CHINIT will start adding more channels to a dispatcher. A dispatcher
works better when it has many channels running on it. This report also shows the average time per
request.
The lower part of the report shows the distribution of the channels.
This is the report that would help an MQ administrator work out if there are enough dispatchers.
27. 53
Adapters
NOTES
This shows an example of the adapter report.
The adapters process MQ requests. Some of these requests might wait, for example for log I/O during
a commit, so the difference between the average CPU time and average elapsed time per request can
be quite large. When an MQ request is made, the first free adapter task is used; this accounts for the
high number of requests on the first adapter.
This is the report that an MQ administrator would use to ensure that there are enough adapter TCBs
defined, as a channel should not need to wait for an adapter.
28. 55
Channel Information (1)
NOTES
This is the top half of the information about a channel.
This is a private sender called MQ89_1. Although STATCHL has 5 possible values, on z/OS
MQ treats this as on or off. The channel shown has STATCHL on, so we are seeing the most accurate
channel data. There have been 2998 messages sent in this SMF time slice, 1506 of which were
persistent, so we can conclude the other 1492 were non-persistent.
The batch target is 50 messages, but in this SMF time slice 38.9 messages per batch have been achieved.
We can also see the number of full and partial batches.
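A quick arithmetic check on the figures quoted in this report:

```python
messages = 2998     # messages sent in the SMF time slice
persistent = 1506   # of which persistent

print(messages - persistent)        # non-persistent messages in the interval

achieved_batch = 38.9               # achieved messages per batch
print(round(messages / achieved_batch))  # approximate number of batches sent
```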
29. 57
Channel Information (2)
127.0.0.1 MQ89_1 Message data 17,198,653 16 MB
127.0.0.1 MQ89_1 Persistent message data 4,251,780 4 MB
127.0.0.1 MQ89_1 Non persistent message data 12,946,873 12 MB
127.0.0.1 MQ89_1 Total bytes sent 17,200,221 16 MB
127.0.0.1 MQ89_1 Total bytes received 3,052 2 KB
127.0.0.1 MQ89_1 Bytes received/Batch 39 39 B
127.0.0.1 MQ89_1 Bytes sent/Batch 223,379 218 KB
127.0.0.1 MQ89_1 Batches/Second 0
127.0.0.1 MQ89_1 Bytes received/message 1 1 B
127.0.0.1 MQ89_1 Bytes sent/message 5,737 5 KB
127.0.0.1 MQ89_1 Bytes received/second 25 25 B/sec
127.0.0.1 MQ89_1 Bytes sent/second 140,985 137 KB/sec
127.0.0.1 MQ89_1 Compression rate 0
127.0.0.1 MQ89_1 Exit time average 0 uSec
127.0.0.1 MQ89_1 DNS resolution time 0 uSec
127.0.0.1 MQ89_1 Net time average 312 uSec
127.0.0.1 MQ89_1 Net time min 43 uSec
127.0.0.1 MQ89_1 Net time max 4,998 uSec
127.0.0.1 MQ89_1 Net time max date&time 2014/04/08,19:43:52
NOTES
The bottom half of the channel information shows the number of bytes that have been sent. From what
we saw on the previous slide and this information, we can conclude that on average the non-persistent
messages are larger than the persistent messages.
SupportPac MP1B also does some helpful calculations and gives the bytes sent or received per
message or per second. As this is a sender type channel, the received data is small, as we would expect.
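The conclusion about average message sizes can be verified from the figures on the slide:

```python
persistent_bytes = 4_251_780        # "Persistent message data"
non_persistent_bytes = 12_946_873   # "Non persistent message data"
persistent_msgs = 1506              # from the previous slide
non_persistent_msgs = 2998 - 1506   # remaining messages in the interval

print(persistent_bytes // persistent_msgs)          # average persistent size (bytes)
print(non_persistent_bytes // non_persistent_msgs)  # average non-persistent size (bytes)
```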
30. IBM Software Group WebSphere Software
SCM Storage (Flash)
NOTES
The next feature that I am going to talk about is SCM storage, which is sometimes called flash. This is a
change that the CF team have developed to help a QSG store messages. The cost of storing
messages in the CF is reduced, but some memory must be spent to achieve those gains.
This is not a true version 8 feature, because all the configuration is done at the CF level, with the right
hardware and software. This means that this CF flash memory can be used with MQ version 8 as well
as prior versions.
31. 61
CF Flash Scenarios: Planned Emergency Storage
[Diagram: CFSTRUCT filling through 70%, 80% and 90% thresholds, with OFFLOAD rules sending progressively smaller messages (>32kB, then >4kB, then ALL) to SMDS, and flash holding the overflow]
CFSTRUCT OFFLOAD rules
cause progressively smaller
messages to be written to SMDS
as the structure starts to fill.
Once 90% threshold is reached,
the queue manager is storing the
minimum data per message to
squeeze as many message
references as possible into the
remaining storage.
CF Flash algorithm also starts
moving the middle of the queue
out to flash storage, keeping the
faster 'real' storage for messages
most likely to be gotten next.
NOTES
This first slide shows a planned emergency storage scenario.
We are using the CF mainly for our messages, but we have configured SMDS to hold messages once
the CFSTRUCT OFFLOAD rules come into play. The rules cause progressively smaller messages
to be written to SMDS as the structure starts to fill up.
Once the 90% threshold is reached, the queue manager is storing the minimum data per message, to
squeeze as many message references as possible into the remaining storage.
The CF team have used queuing theory to develop the flash algorithm. Messages that have been put most
recently and have the lowest priority are the least likely to be got next, so the CF also starts
moving the middle of the queue out to flash storage, keeping the faster 'real' storage for messages
most likely to be gotten next.
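The offload rules for this scenario could be expressed in MQSC roughly as follows (the structure name is an illustrative assumption; SMDS offload requires CFLEVEL(5)):

```
* Progressively smaller messages go to SMDS as the structure fills
ALTER CFSTRUCT(APP1) CFLEVEL(5) OFFLOAD(SMDS) +
      OFFLD1TH(70) OFFLD1SZ(32K) +
      OFFLD2TH(80) OFFLD2SZ(4K) +
      OFFLD3TH(90) OFFLD3SZ(0K)
```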
32. 63
CF Flash Scenarios: Maximum Speed
[Diagram: CFSTRUCT with OFFLOAD rules set to the special value 64kB (disabled), so only large messages go to SMDS; at 90% full the middle of the queue is moved to flash]
We want to keep high performance
messages in the CF for most rapid
access.
CFSTRUCT OFFLOAD are configured
with special value '64k' to turn them off.
(*) you might choose to use one rule to
alter the 'large data' threshold from 63kB
down
Once 90% threshold is reached, the CF
Flash algorithm starts moving the middle
of the queue out to flash storage, keeping
the faster 'real' storage for messages
most likely to be gotten next.
As messages are got and deleted, the CF
flash algorithm attempts to pre-stage the
next messages from flash into the
CFSTRUCT so they are rapidly available
for MQGET.
In this scenario the flash storage acts like
an extension to 'real' CFSTRUCT storage.
However it will be consumed more rapidly
since all small message data is stored in
it.
NOTES
In the next scenario, we want to keep high performance messages in the CF for most rapid access.
The CFSTRUCT OFFLOAD rules are configured with the special value 64K to turn them off. An MQ
administrator may choose to use one rule to alter the 'large data' threshold from 63K down.
Once the 90% threshold is reached, the CF flash algorithm starts moving the middle of the queue out
to flash storage, keeping the faster 'real' storage for messages most likely to be gotten next.
As messages are got and deleted, the CF flash algorithm attempts to pre-stage the next messages
from flash into the CFSTRUCT, so they are rapidly available for MQGET.
In this scenario the flash storage acts like an extension to 'real' CFSTRUCT storage. However, it will
be consumed more rapidly, since all small message data is stored in it.
33. 65
CF Flash: Storage
Scenario               Msg Size   Total Msgs   # in 'real'   SMDS space   # in 200GB flash   Augmented (limit 30GB)
No SMDS, No Flash      1kB        3M           3M            -            -                  -
                       4kB        900,000      900,000       -            -                  -
                       16kB       250,000      250,000       -            -                  -
SMDS, No Flash         1kB        3.2M         3.2M          800MB        -                  -
                       4kB        1.8M         1.8M          5GB          -                  -
                       16kB       1.3M         1.3M          20GB         -                  -
"Emergency" Scenario   1kB        190M         2M            270GB        190M               30GB
                       4kB        190M         600,000       850GB        190M               30GB
                       16kB       190M         150,000       3TB          190M               30GB
"Speed" Scenario       1kB        150M         2M            -            150M               26GB
                       4kB        48M          600,000       -            48M                8GB
                       16kB       12M          150,000       -            12M                2GB
NOTES
In the table shown, we define a CFSTRUCT with a size of 4 gigabytes. We have a maximum of 200 gigabytes of
flash memory available, but we decide not to use more than an additional 30 gigabytes of augmented storage.
The table considers 4 scenarios and shows the effect of message size in each; the "# in 'real'" column
estimates how many MQ messages fit in CF real storage.
The first scenario has neither flash nor SMDS. The entire structure is available to store messages. A total of
250,000 16 kilobyte messages can be stored.
The second scenario introduces SMDS offload with default rules. The 1k messages don't go to SMDS until the
structure is 90% full, but the 4k and 16k cases both start offloading at 80%. The CF space released by
offloading data can hold more message pointers, so the 1k case doesn't increase the number of messages
greatly, but the 4k number doubles, and 16k gets 5 times as many, to 1.3 million.
The third scenario is our "Emergency Flash". Because flash is configured, there is less 'real' storage for storing
data; I've assumed 1 gigabyte less for 200 gigabytes of flash. Here flash only holds the message references,
so the message size in flash is small. Our experiments show about 175 bytes of augmented storage
per message in flash. We have chosen to limit augmented storage to 30 gigabytes, so message numbers are
limited by augmented storage. The SMDS holds the actual data. In this scenario 190 million messages can be
stored.
The last scenario is entitled "Max Speed". All message sizes are below the SMDS threshold, so SMDS is not
used. The limit is now how many messages will fit in 200 gigabytes of flash. We show the approximate size of
augmented storage needed to support these volumes of messages in flash. This gives us space for 12 million
16k messages.
The numbers are estimates only, based on our limited test scenarios and CFSIZER data.
34. 67
CFLEVEL(4) using 8KB Messages
Saw-tooth effect occurs when capture task goes into retry mode due to “storage medium full” reason code.
Even with these 5 second pauses, the non-SCM capable workload completes in 90% of the time of the SCM
capable workload.
Cost of workload in MVS differs by less than 2%.
Get rate once the capture task has completed:
No SCM: 21100 messages / second ~ 164MB/sec
SCM: 19000 messages / second ~ 148MB/sec
[Chart: CFLEVEL(4) 8K messages, XMIT queue depth (0 to 1,000,000) against time in seconds, comparing 'No SCM available' and 'SCM available', with a dotted line marking the depth at which SCM was used]
NOTES
The graph demonstrates that put and get rates do not significantly alter as CF 'real' storage overflows and data
is offloaded to, or read back from, CF flash. This is demonstrated by comparing the slopes of the red and blue
lines and noticing no significant kink in the red line as CF 'real' storage passes the 90% threshold.
The scenario being tested uses CFLEVEL(4), so SMDS configuration was not required. However, it corresponds
identically with our "Max Speed" scenario.
The test case has a message generator task putting 8K messages on to a transmission queue. A channel is
getting these in FIFO order. The message generator pauses for a few seconds when it hits 'storage medium
full', leading to the saw-tooth shape of the blue line, where no flash memory is available.
The red dotted line indicates the threshold at which messages in the red line test started to be written to flash
storage. Notice that it is significantly lower than 90% of the 'medium full' blue peaks, because some of the
structure storage has gone to control flash, and the threshold is 90% of the remainder.
The final point is that CPU costs are not significantly different whether flash is being used or not.
From a performance perspective, using sequential queues, flash memory behaves like CF real storage.
35. IBM Software Group WebSphere Software
Other z/OS Items
The final section of this presentation will cover other improvements that we have made to MQ version 8.
36. 71
WebSphere MQ
IBM Software Group | WebSphere software
64-bit application support
- 64-bit application support for the C language
  - no 64-bit COBOL
- LP64 compile option
  - supported by cmqc.h
- Restricted environments:
  - batch, TSO, USS
  - CICS® and IMS® do not support 64-bit apps
  - WebSphere Application Server is already 64-bit
- Must use sidedeck & DLL, not stubs:
  - csqbmq2x (uncoordinated batch & USS)
  - csqbrr2x (RRS coordinated, srrcmit())
  - csqbri2x (RRS coordinated, MQCMIT)
MQ now supports 64-bit applications written in the C language. There is no support for COBOL, PL/I or assembler. The LP64 compile option has to be used. The MQ C header file cmqc.h supports both 31-bit and 64-bit applications.
The support is restricted to batch, TSO and USS. There is no support for CICS and IMS 64-bit applications. WebSphere Application Server already connects with 64-bit connections.
In order to use the 64-bit support, programs have to be linked with the sidedeck and DLL, not the stubs.
37. 73
WebSphere MQ
IBM Software Group | WebSphere software
Client Attachment Feature
- Client attachment feature no longer exists in MQ v8.0
  - client capability available by default
  - use CHLAUTH rules to protect the QMGR if you didn’t previously use clients
- Client attachment feature also now non-chargeable on previous releases
  - APAR also available to enable the functionality without installing the CAF
The Client Attachment Feature, which was required on previous versions of MQ to connect client applications to a z/OS queue manager, no longer exists in MQ version 8. The client capability exists by default on a version 8 queue manager. If you did not previously connect client applications to a z/OS queue manager, consider using CHLAUTH rules to protect the queue manager.
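As a sketch of that approach (an illustrative MQSC fragment, not from the presentation; the channel profile, name and address range are assumptions), a back-stop rule can deny all client access by address, with a more specific rule then allowing a known channel:

```
* Hypothetical back-stop rule: deny client access from all addresses
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)
* Then allow a specific, known client channel from a trusted subnet
SET CHLAUTH('APP1.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('10.20.*') USERSRC(CHANNEL)
```

The more specific ADDRESSMAP rule takes precedence over the generic one, so only the named channel from the named subnet gets through.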
38. 75
WebSphere MQ
IBM Software Group | WebSphere software
Other z/OS Items
- Message suppression EXCLMSG
  - formalized a service parm which suppressed client channel start & stop messages
  - extended to be generalized and applicable to most MSTR and CHIN messages
- DNS reverse lookup inhibit REVDNS
  - response to a customer requirement for a workaround when DNS infrastructure is impacted
- zEDC compression hardware exploitation for COMPMSG(ZLIBFAST)
  - needs zEC12 GA2 + zEDC card
  - can yield higher throughput & reduced CPU for SSL channels
Version 8 has also added message suppression.
EXCLMSG is a ZPARM value, but can also be specified using the SET SYSTEM command. It specifies a list of message identifiers to be excluded from being written to any log. Messages in this list are not sent to the z/OS console or the hardcopy log. As a result, using the EXCLMSG parameter to exclude messages is more efficient from a CPU perspective than using z/OS mechanisms such as the message processing facility list, and should be used instead where possible. The default value is an empty list.
Message identifiers are supplied without the CSQ prefix and without the action code suffix (I, D, E, or A). For example, to exclude message CSQX500I, add X500 to this list. The list can contain a maximum of 16 message identifiers.
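For instance (an illustrative MQSC fragment; the particular messages chosen are assumptions), suppressing the channel started and stopped messages CSQX500I and CSQX501I would look like:

```
* Suppress channel started/stopped messages CSQX500I and CSQX501I
SET SYSTEM EXCLMSG(X500,X501)
* Display the system parameters to verify the current exclusion list
DISPLAY SYSTEM
```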
For some customers the time taken to reverse look up an IP address to a host name can cause issues. MQ has added a queue manager attribute, REVDNS, which can be used to prevent the use of DNS for host name lookup. In earlier releases a service parm was provided to some customers so that channel hangs could be avoided in situations where a DNS infrastructure becomes non-responsive.
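As a sketch, inhibiting reverse DNS lookup is a single queue manager attribute change (illustrative MQSC):

```
* Stop the channel initiator using DNS to look up host names from IP addresses
ALTER QMGR REVDNS(DISABLED)
```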
zEDC compression hardware exploitation is another improvement. When using COMPMSG(ZLIBFAST) for channel message compression, MQ will exploit the zEDC card if also running on a zEC12 GA2. This can yield higher throughput and reduced CPU cost for SSL channels.
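A hypothetical example (the channel name is an assumption): offering ZLIBFAST message compression on a sender channel, so that compression is done by the zEDC card where the hardware supports it:

```
* Offer ZLIBFAST message compression on a sender channel;
* on a zEC12 GA2 with a zEDC card the compression is done in hardware
ALTER CHANNEL('TO.REMOTE.QM') CHLTYPE(SDR) COMPMSG(ZLIBFAST)
```

The compression choice is negotiated at channel start, so the receiving end must also permit ZLIBFAST for it to take effect.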
39. 77
WebSphere MQ
IBM Software Group | WebSphere software
MQ platform and product updates
- Split Cluster Transmit Queue availability in MQ for z/OS
- Integration of AMS capabilities into MQ for z/OS
  - currently separately installable products
- Support of AMS with the IMS Bridge
- MFT has less reliance on USS
As MQ for z/OS didn't have a version 7.5 release, there are a few items that we have caught up with.
Split cluster transmit queues: this allows multiple transmission queues to be defined for a cluster, so different work can be transported over different channels. This, for example, enables different qualities of service to be offered depending on the queues used.
We have also integrated AMS into the product to improve performance and usability; previously a separately installable product enabled the AMS feature. With version 8, AMS now supports the IMS bridge.
MFT has been improved to have less reliance on USS.
40. IBM Software Group WebSphere Software
And Finally...
41. 81
Matt and I have just finished writing an IBM Redbooks publication with a small group of authors. The book is on the new features in version 8; the topics that we have talked about in this presentation are covered there in greater depth. The book is still in the editing process, so we would urge you to look out for it as it will be a great resource.
42. IBM Software Group WebSphere Software
Questions?
Now it's time for questions.
43. IBM Software Group WebSphere Software
Thank You
Matt and I would like to thank you for listening, and we hope that you have seen the value of MQ version 8 for your business.