This document provides information about a presentation on DB2 for z/OS data and index compression given by Willie Favero. It includes disclaimers about the information provided, lists IBM trademarks, and outlines objectives to describe DB2 compression fundamentals, how data and index compression are implemented in DB2, and how to determine if compression achieves expected disk savings. It also references the history of data compression techniques including the Lempel-Ziv algorithms from 1977 that DB2 compression is based on.
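The deck notes that DB2 compression is based on the Lempel-Ziv algorithms from 1977. As a purely illustrative sketch of the core idea (replacing repeated byte sequences with back-references), here is a toy LZ77-style codec in Python; this is not DB2's actual hardware-assisted, dictionary-based implementation, and all names are made up for illustration:

```python
def lz77_compress(data: bytes, window: int = 255) -> list:
    """Toy LZ77: emit (offset, length, next_byte) tuples.

    Illustrative only -- DB2 uses hardware-assisted, dictionary-based
    Ziv-Lempel compression, not this exact scheme.
    """
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for j in range(start, i):          # scan the sliding window
            length = 0
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len]           # literal byte after the match
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out


def lz77_decompress(tokens: list) -> bytes:
    buf = bytearray()
    for off, length, nxt in tokens:
        start = len(buf) - off
        for k in range(length):            # byte-by-byte copy handles overlap
            buf.append(buf[start + k])
        buf.append(nxt)
    return bytes(buf)


data = b"abracadabra abracadabra"
tokens = lz77_compress(data)
assert lz77_decompress(tokens) == data
```

The same back-reference idea underlies the dictionary-based Ziv-Lempel scheme the presentation covers, where hardware assists make the lookup fast enough for row-level compression.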
This document discusses Android, an open source software stack for mobile devices. It is a complete system comprising an operating system, middleware, and key applications. Android is developed as part of the Open Handset Alliance and is powered by the Linux kernel. It uses Java for application development and includes APIs for graphics, data storage, media, Bluetooth, WiFi and more. Developers write Android apps in Java, which are then compiled to Dalvik bytecode and run on Android devices.
The document discusses the Android application framework. It includes core libraries that provide functionality like media playback, 2D/3D graphics, and SQLite. The Dalvik VM was used before Android 5 but has been replaced by ART, which uses ahead-of-time compilation. The framework includes activities, intents, services, and content providers as important app components. It also handles notifications, audio/video output, and surfaces using managers. Fragments allow dividing an activity's UI.
Congestion Control in Wireless Sensor Networks - An Overview of Current Trends (Editor IJCATR)
In a WSN, congestion occurs when the traffic load exceeds the available capacity at any point in the network. Congestion plays an important role in degrading network performance and can even cause network failure, so it is essential to detect and control congestion throughout the WSN; doing so improves overall network performance. Several factors contribute to congestion, chief among them buffer overflow and packet loss, which lower network throughput and waste energy. Addressing this challenge calls for a distributed algorithm that mitigates congestion and allocates an appropriate source rate toward the sink node in a wireless sensor network. This paper presents approaches for controlling and managing congestion in a wireless sensor network.
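A common family of schemes matches the description above: detect congestion from buffer occupancy and adjust the node's source rate. The sketch below is a generic AIMD-style illustration; the class name, thresholds, and constants are assumptions for demonstration, not taken from any specific protocol in the survey:

```python
class SensorNode:
    """Illustrative buffer-occupancy-based rate control for a WSN node.

    All thresholds and constants here are made up for illustration;
    real protocols use their own congestion signals and control laws.
    """

    def __init__(self, buffer_size: int = 64, rate: float = 10.0):
        self.buffer_size = buffer_size
        self.queue = 0          # current packets queued
        self.rate = rate        # packets/sec offered toward the sink

    def congested(self) -> bool:
        # Congestion hint: queue above 80% of buffer capacity,
        # i.e. the node is close to buffer overflow.
        return self.queue > 0.8 * self.buffer_size

    def adjust_rate(self) -> None:
        if self.congested():
            self.rate *= 0.5    # multiplicative decrease under congestion
        else:
            self.rate += 1.0    # additive increase when the buffer drains
        self.rate = max(self.rate, 1.0)


node = SensorNode()
node.queue = 60                 # near buffer overflow
node.adjust_rate()              # rate is halved: 10.0 -> 5.0
```

The multiplicative decrease backs traffic off quickly before buffers overflow, while the additive increase probes for spare capacity once the queue drains, which is the basic trade-off most distributed rate-allocation schemes for WSNs navigate.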
Rover is a system that enables location-based services by tracking user locations. It uses a Rover controller to interact with location services, clients, and content providers. The system architecture includes Rover clients, wireless access points, servers like the location server and media streaming, and a Rover database. It aims to scale to serve large numbers of users across various devices and wireless technologies.
- Virtualization allows multiple operating systems to run concurrently on a single physical machine by presenting each virtual operating system with a virtual hardware environment. A hypervisor manages access to the physical hardware resources and isolates the virtual machines.
- Cloud computing extends virtualization by allowing virtual servers and other resources to be dynamically provisioned on demand from large shared computing infrastructure. This improves flexibility and allows users to pay only for resources that are consumed.
- The hypervisor software manages the virtual machines and allocates physical resources to each one while isolating them from each other. Example hypervisors include VMware, Xen, and KVM. Virtualization improves hardware utilization and makes infrastructure more flexible and cost-effective.
Mobile Application Development With Android (guest213e237)
The document discusses mobile application development for Android. It provides an overview of the Android platform and architecture, including core application components like activities, services, content providers and intents. It also covers the Android software development kit, tools like Eclipse and Android Developer Tools plugin, and the steps to create a basic "Hello World" Android application using the Android SDK.
This document provides an overview of Android internals through a series of topics:
1. It describes key Android concepts like components, intents, and the manifest file.
2. It outlines the overall Android architecture including system startup processes like the bootloader, kernel, init, zygote and system server.
3. It covers various aspects of the Android system like the Linux kernel customizations, native user-space environment, Dalvik VM, and Java Native Interface.
4. It also profiles important system-level components like the system server, activity manager, and Binder IPC mechanism.
Cloud Computing - Technologies and Trends (Marcelo Sávio)
This document provides an overview of cloud computing, including definitions of cloud service models (IaaS, PaaS, SaaS), deployment options (private, public, hybrid clouds), characteristics of cloud computing, major factors driving adoption of cloud computing, and trends in cloud adoption among organizations. Key trends discussed include the growth of cloud services, increasing utilization of cloud technologies by enterprises, and different motivations for cloud adoption between IT and business users.
The document provides an overview of the Android platform architecture. It describes Android as an open source mobile operating system led by the Open Handset Alliance. The key components of the Android architecture include the Linux kernel, libraries, Android runtime using the Dalvik virtual machine, framework APIs, and applications. Applications are built using activities, services, content providers and broadcast receivers. The document also discusses Android security using a permission-based model.
Virtualization allows multiple operating systems to run simultaneously on a single physical server using a hypervisor. This reduces costs by improving hardware utilization, lowering maintenance needs, and providing continuous server uptime. There are two main hypervisor types: native hypervisors have direct access to server hardware while hosted hypervisors run within an operating system. Virtualization offers advantages like zero downtime maintenance, dynamic resource allocation, and automated backups.
Virtualization allows multiple operating systems to run on a single physical machine by dividing the machine's resources virtually. It works by applying hardware and software partitioning to create isolated execution environments for each virtual system. There are different types of virtualization functions such as sharing, aggregating, emulating, and insulating virtual resources. While virtualization started on mainframes to improve resource utilization, modern virtualization aims to address challenges like rising infrastructure costs and insufficient disaster protection. Virtualization abstracts computer resources and separates privilege levels through defined interfaces, but this also introduces constraints that virtualization aims to overcome.
The document provides an overview of cloud computing, including definitions, models like SaaS, PaaS and IaaS, and concepts like public and private clouds. It discusses benefits of cloud computing like reduced costs and increased flexibility, as well as challenges around data protection, availability and regulatory compliance. The document also covers virtualization topics such as types of virtualization, virtual machine architecture, and virtual networking and storage components in VMware.
This document discusses challenges related to virtual machine (VM) migration in cloud computing. It provides background on cloud computing and virtual machines. Key issues discussed include automated service provisioning, VM migration for server consolidation and energy management, and security challenges. The document also covers motivation for VM migration when workload increases trigger resource requirement changes. Methods for VM migration discussed include memory, network, and device migration techniques. Performance evaluation results of migration are presented. Migration across data centers introduces additional challenges like increased latency. Proposed solutions discussed encryption for security and redirection approaches to handle increased latency.
Wireless sensor networks consist of hundreds or thousands of sensor nodes that are distributed to monitor various environmental conditions through sensing, processing, and communicating with each other and a base station. These sensor nodes have limitations in terms of power, memory, and processing capabilities compared to other networks. Wireless sensor networks have a wide range of applications including military surveillance, environmental monitoring, smart homes/buildings, and healthcare.
The Android emulator allows developers to test Android applications without using physical devices. It simulates key aspects of an Android device including hardware, software, and various form factors. The emulator runs on the computer and displays an emulated Android device that developers can interact with. It supports running multiple emulated Android devices at once with varying configurations defined through Android Virtual Devices (AVDs). The emulator and AVDs allow easy prototyping and testing of Android applications across different device profiles before releasing to physical hardware.
The document provides an overview of the Android operating system. It discusses that Android is an open source software platform based on the Linux kernel and allows developers to write managed code using Java. It is developed by Google and other companies part of the Open Handset Alliance. The document then describes Android's history and architecture, including its use of the Linux kernel, Binder for inter-process communication, Dalvik virtual machine, core libraries, and application framework. It also covers the application lifecycle and how the Android system starts up.
In these slides you can find the basic concepts of natural user interfaces. From the evolution of the classic desktop centered applications to the more intuitive and natural ones.
Virtualization with KVM (Kernel-based Virtual Machine) - Novell
As a technical preview, SUSE Linux Enterprise Server 11 contains KVM, which is the next-generation virtualization software delivered with the Linux kernel. In this technical session we will demonstrate how to set up SUSE Linux Enterprise Server 11 for KVM, install some virtual machines and deal with different storage and networking setups.
To demonstrate live migration we will also show a distributed replicated block device (DRBD) setup and a setup based on iSCSI and OCFS2, which are included in SUSE Linux Enterprise Server 11 and SUSE Linux Enterprise 11 High Availability Extension.
This document provides an introduction to virtualization including:
1) The benefits of virtualization like efficient resource utilization and strong isolation between virtual machines.
2) A brief history of virtualization from the 1960s mainframe era to modern ubiquitous cloud computing.
3) Popular use cases of virtualization including cloud computing, virtual desktop infrastructure, and mobile virtualization.
4) Basic terminologies that distinguish type-1 and type-2 virtual machine monitors as well as full and para-virtualization methods.
What is Virtualization and Its Types & Techniques. What is a Hypervisor and Its ... (Shashi soni)
This PPT covers the following topics:
1. What is virtualization?
2. Examples of virtualization.
3. Techniques of virtualization.
4. Types of virtualization.
5. What is a hypervisor?
6. Types of hypervisors, with diagrams.
It also includes examples such as VirtualBox with a demo image.
Flash Ahead: IBM FlashSystem Selling Point (CTI Group)
IBM's FlashSystem storage is designed to radically accelerate critical applications by providing consistent low latency flash performance. It can integrate with existing disk arrays to offload I/O-intensive workloads while improving overall performance. FlashSystem utilizes IBM's flash technology and software to deliver microsecond response times for applications such as databases, virtual infrastructures, and cloud computing. The FlashSystem family includes the all-flash 710, 720, 810, and 820 models that are optimized for performance, capacity, and mixed workloads.
IBM DB2 Analytics Accelerator Trends & Directions, by Namik Hrle and Surekha Parekh
IBM DB2 Analytics Accelerator has drawn a lot of attention from DB2 for z/OS users. In many respects it presents itself as just another DB2 access path (but what a powerful one!), and its deep integration into DB2, together with its application transparency, makes it one of the most exciting DB2 enhancements in years. The IBM DB2 Analytics Accelerator complements DB2 by adding industry-leading performance for data-intensive complex queries, thanks to being powered by the Netezza engine, and turns DB2 into the ultimate database management system, delivering the best of both worlds: transactional as well as analytical workloads. This presentation brings the latest news from IDAA development and shows the trends and directions in which this technology is developing.
Best Practice of Compression/Decompression Codecs in Apache Spark, with Sophia... (Databricks)
Nowadays, people are creating, sharing, and storing data at a faster pace than ever before, and effective data compression/decompression can significantly reduce the cost of data usage. Apache Spark is a general distributed computing engine for big data analytics, and it stores and shuffles large amounts of data across the cluster at runtime, so the choice of compression/decompression codec can affect end-to-end application performance in many ways.
However, there is a trade-off between storage size and compression/decompression throughput (CPU computation). Balancing compression speed against ratio is a very interesting topic, particularly while both software algorithms and the CPU instruction set keep evolving. Apache Spark provides a flexible compression codec interface with default implementations such as GZip, Snappy, LZ4, and ZSTD, and the Intel Big Data Technologies team has also implemented additional codecs for Apache Spark based on the latest Intel platforms, such as ISA-L (igzip), LZ4-IPP, and Zlib-IPP. In this session, we compare the characteristics of those algorithms and implementations by running micro workloads as well as end-to-end workloads on different generations of Intel x86 platforms and disks.
The session is intended to give big data software engineers a best practice for choosing the proper compression/decompression codecs for their applications, and we will also present methodologies for measuring and tuning the performance bottlenecks of typical Apache Spark workloads.
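The speed-versus-ratio trade-off the session describes can be demonstrated with Python's standard-library codecs. In this hedged illustration, zlib at a low compression level stands in for a fast codec and lzma for a slower, higher-ratio one (Snappy, LZ4, and ZSTD bindings are not in the standard library); the payload and all numbers are synthetic:

```python
import time
import zlib
import lzma

# Repetitive synthetic payload, loosely in the spirit of shuffle data
# with many repeated values; real Spark data compresses differently.
payload = b"user_id,event,timestamp\n" * 50_000


def measure(name, compress):
    """Time one codec and report its compression ratio."""
    t0 = time.perf_counter()
    out = compress(payload)
    dt = time.perf_counter() - t0
    ratio = len(payload) / len(out)
    print(f"{name:8s} ratio={ratio:8.1f}x  time={dt * 1000:7.2f} ms")
    return out


fast = measure("zlib-1", lambda d: zlib.compress(d, level=1))   # fast, lower ratio
best = measure("lzma", lambda d: lzma.compress(d))              # slow, higher ratio

# Both round-trip losslessly; they differ only in speed and size.
assert zlib.decompress(fast) == payload
assert lzma.decompress(best) == payload
```

In Spark itself the equivalent knob is the `spark.io.compression.codec` configuration property (accepting values such as `lz4`, `snappy`, and `zstd`), which selects the codec used for shuffle and internal data.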
Lakefield is Intel's new hybrid core architecture that enables new thin and compact mobile devices. It utilizes 3D Foveros packaging to stack multiple dies, including compute and base dies, into a small 12x12mm package. This new architecture provides significant improvements over previous generations, including around 10x lower standby power, 50% better graphics performance, and smaller core and PCB areas, all while maintaining high performance in a lower total power design point ideal for mobile form factors. Lakefield is slated for production in late 2019.
z/OS 2.2 includes several new performance enhancements for private cloud, mobile, and analytics workloads including support for larger memory and increased logical processors on IBM z13 servers. It also features improvements to workload management, availability, and capacity to better support these new workloads. DFSMS, USS, and other subsystems receive updates to optimize storage, I/O, and threading to improve performance. Overall, z/OS 2.2 aims to provide enterprises with an optimized mainframe platform for emerging workload patterns.
High End Modeling & Imaging with Intel Iris Pro Graphics (Intel® Software)
This document summarizes benchmark testing of 3D modeling and design software performance on systems with different GPUs, including Intel Iris Pro graphics. The Intel Iris Pro graphics provided performance on par with higher-end discrete GPUs for a complex 3D model with over 3,000 parts in Autodesk Fusion 360. Iris Pro graphics offer fast performance for 3D workloads while being less expensive than discrete graphics cards. The document also discusses options for deploying 3D modeling software virtually using either NVIDIA GRID or Intel Iris Pro graphics, concluding there are now multiple options available to choose the best solution based on workflow and business needs.
This document provides an overview of IBM's Hadoop solution on Power Systems, including:
- The basic architecture of IBM's Hadoop solution using Power Systems servers and GPFS storage.
- Considerations for sizing a Hadoop cluster, such as compression rates and space for shuffle/sort data.
- The IBM Solution for Hadoop POWER System edition and IBM Data Engine for Analytics solutions.
- Networking recommendations for Hadoop clusters including appropriate switches and cabling.
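The sizing considerations listed above lend themselves to a quick back-of-envelope calculation. The sketch below is purely illustrative; the compression ratio, replication factor, and shuffle overhead are assumed values, not figures from IBM's sizing guidance:

```python
# Hedged, back-of-envelope Hadoop capacity sizing. Every constant is
# an assumption for illustration, not a vendor recommendation.
raw_data_tb = 100.0        # expected raw input data, in TB
compression_ratio = 2.5    # assumed on-disk compression ratio
replication = 3            # HDFS replication factor (common default)
shuffle_overhead = 0.25    # extra headroom for shuffle/sort temp data

stored_tb = raw_data_tb / compression_ratio * replication
total_tb = stored_tb * (1 + shuffle_overhead)

print(f"usable cluster capacity needed: {total_tb:.1f} TB")
# 100 / 2.5 = 40 TB compressed, x3 replicas = 120 TB,
# plus 25% shuffle headroom = 150 TB
```

The same arithmetic explains why compression rate estimates matter so much when sizing a cluster: halving the achieved ratio doubles the replicated footprint before any shuffle headroom is added.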
[Café techno] - IBM Power7 - The Latest Announcements (Groupe D.FI)
Announcements of the new IBM Power7+ technology.
For more than 10 years, enterprises have favored Power technology for AIX, IBM i, and Linux. Today, IBM extends the leadership of its Power platforms by introducing a technological evolution with the Power7+ architecture.
D.FI invites you to discover the new Power7+ features that can help you meet your IT requirements by dynamically offering greater efficiency, business analytics functions, and workload consolidation.
What's new:
The Power7+ is an eight-core chip fabricated at 32 nm (versus 45 nm for the Power7):
- Benefits: the clock frequency is boosted;
- the eDRAM cache size is multiplied by 2.5.
New functions: hardware assistance for AME ("Active Memory Expansion") memory compression and cryptographic acceleration.
A major evolution now makes it possible to create up to 20 micro-partitions per Power7+ core.
The document discusses how Pure Storage's FlashBlade storage system is designed to be a data hub that can power various modern data and analytics workloads including AI, machine learning, data warehousing, and streaming analytics. It provides high throughput, scale-out performance for file and object storage, and is purpose built to deliver the performance needed for these next generation workloads. FlashBlade uses a scale-out architecture with blades, networking, and software that allows for linear scaling of performance and capacity.
This document provides an overview of a training session on storage and the Data Facility Storage Management Subsystem (DFSMS) for z/OS. The training will cover z/OS storage fundamentals, storage systems for z/OS including disk drives, tape drives, and the IBM DS8000 family of storage systems. It will also cover the DFSMS software which manages storage hierarchies and the movement of data between online, nearline, and offline storage devices. Attendees must complete 9 of the 12 listed lectures and all required lab exercises to earn a certificate.
The Supermicro X12 product line, powered by 3rd Gen Intel® Xeon® Scalable processors, contains many innovations that gives organizations more performance for a variety of workloads.
Join this webinar to learn more about the outstanding performance you can get by using Supermicro X12 servers and storage systems using the latest technologies from Intel®.
Watch the webinar: https://www.brighttalk.com/webcast/17278/514618
This document discusses EMC Isilon scale-out NAS storage solutions. It provides an overview of EMC Isilon's market leadership in scale-out NAS, key trends in unstructured data growth, and how Isilon addresses next-generation workloads. The document also outlines Isilon's hardware and software features like its OneFS operating system, data protection and management tools, and product family which scales from high transactional to high density platforms.
Learn about the latest generation of IBM System z servers, the IBM System z9 Enterprise Class (z9 EC, formerly the IBM System z9 109 (z9-109)) and the IBM System z9 Business Class (z9 BC), are designed to provide an advanced combination of reliability, availability, security, scalability, and virtualization features. The good news is all supported z/OS releases can run on a z9 EC or z9 BC server (all supported z/OS.e releases can run on a z9 BC server). Similarly, all supported z/OS and z/OS.e releases can participate in a sysplex that has a CF or operating system image on a z9 server. The even better news is that most customers are well positioned to use the new server. For more information on IBM System z, visit http://ibm.co/PNo9Cb.
Visit the official Scribd Channel of IBM India Smarter Computing at http://bit.ly/VwO86R to get access to more documents.
IBM FlashSystem and other SSD's are being adopted for OLTP and Analytics applications. Fast 16Gb Flash storage requires a reliable, high performance network to ensure applications can utilize it effectively. Learn how to plan for a highspeed reliable network to handle the increased demands while delivering reliable application response times. Understand the reliability, performance, and simplified management features of Gen5 FC and Fabric Vision. Be prepared for the next jump in SAN's.
The document announces the launch of the new IBM zEnterprise BC12 (zBC12) server. Some key highlights include a 36% increase in processing capacity through 6 central processors running at 4.2GHz each, 512GB of memory, and system I/O bandwidth of 128GB/sec. The zBC12 continues IBM's heritage of CMOS mainframe technology and supports new capabilities like zEDC Express for data compression, 10GbE RoCE Express for high-speed networking, and OSA Express 5S for upgraded I/O adapters.
Edge Computing and 5G - SDN/NFV London meetupHaidee McMahon
Edge computing and 5G will enable one compute platform from edge to cloud. This will provide virtual, software-defined, and cloud-ready capabilities to support the 5G future. Key applications that will benefit include gaming/VR with high bandwidth and low latency requirements, as well as industrial and public safety uses involving real-time video analytics, surveillance, and facial recognition. Edge computing deployments will optimize performance for applications with strict latency constraints by placing computing resources closer to endpoints and users.
The document provides information to help understand the differences between Windows XP Home and Professional editions. Key differences include:
- XP Pro includes features like backup software, dynamic disks, IIS, and encrypted file system that XP Home does not have.
- XP Pro supports up to two processors while XP Home only supports one.
- XP Pro allows systems to be domain members and supports group policies, while XP Home does not.
- Only XP Pro supports upgrades from Windows 2000/NT and will have a 64-bit version for Itanium systems.
The document provides an overview of social networking and discusses whether it is right for individuals, professionals, and companies. It discusses popular social networks like Twitter and LinkedIn, and covers topics like establishing an online presence, networking, and guidelines for appropriate social media use. The presentation aims to help attendees understand how to get started with social networking and determine how it can benefit them personally and professionally.
First of 3 presentations on social networking. All three are all very similar. However, each has a slightly different approach to explaining social networking.
One of three variations of social networking / social media presentations. All three are all very similar. However, each has a slightly different approach to explaining social networking.
A First Look at the DB2 10 DSNZPARM ChangesWillie Favero
This document discusses changes to DB2 subsystem parameter module (DSNZPARM) in DB2 10. It provides information on DSNZPARM macros, how parameters can be changed through installation panels or dynamically using -SET SYSPARM command, and differences between hidden, opaque and visible parameters. The document also introduces new documentation for opaque parameters and explains how to display current DSNZPARM settings using sample program DSN8ED7.
DB2 10 Universal Table Space - 2012-03-18 - no templateWillie Favero
DB2 introduced universal table spaces in version 9 to address the need for a table space type that provides both partitioned and segmented organization. Universal table spaces allow tables to be larger than 64GB, provide inter-partition parallelism, and support fast insert and delete operations while avoiding the overhead of partitioning by a ROWID column.
Tips from my personal experience at preparing an abstract for a conference, preparing a presentation for a conference, and most importantly, how you speak (or present) at a conference. In fact most of this is on the speaking portion of the experience.
An Intro to Tuning Your SQL on DB2 for z/OSWillie Favero
This document provides an introduction to SQL tuning for a DB2 for z/OS environment. It was presented on March 1, 2011 by Willie Favero from IBM's Data Warehouse on System z Swat Team. The presentation covers various techniques for optimizing SQL queries and access paths in DB2 for z/OS, with the goal of improving query performance. It addresses topics such as monitoring wait times, buffer pool usage, checkpointing, WLM policies, sort techniques, and disk I/O optimization. The overall aim is to help database administrators understand how to analyze and "tune" queries to reduce response times and meet business performance objectives.
Why computers are cool high school audience - 2010-12-01Willie Favero
The document discusses career opportunities in computer science and technology. It notes that computer science degrees can lead to a variety of jobs at companies like IBM, including opportunities in hardware, software, services, and sales/marketing. It also highlights the growing demand for technology professionals and lists some local colleges that offer computer science programs to prepare students for these in-demand careers.
This document outlines a joint effort between IBM's Poughkeepsie Lab and Silicon Valley Lab to benchmark a 50TB data warehouse on System z and establish best practices for managing large data warehouses on the System z platform. It discusses using workload manager to handle mixed transactional and analytic workloads, implementation considerations, and references several other IBM Redbooks publications related to enterprise data warehousing with DB2 on System z.
Parallelism was first introduced to DB2 way back in 1993 with DB2 Version 3. With every release of DB2 parallelism has been enhanced. In applications like data warehousing and business intelligence, it's almost a necessity. Yet, a surprisingly large number of customers continue to avoid parallelism when it could offer significant elapsed time improvements. With this presentation, we'll try to debunk the myths that surround using DB2 parallelism. We will discuss a little bit of parallelism's history, how parallelism works, the parameters in DB2 that control parallelism and the affects of the value chosen for those parameters, and how to get the greatest benefits when parallelism is put to use; the DOS & DON'TS necessary for you to get the most out of DB2's parallelism. We will also discuss the latest enhancements to parallelism and how parallelism can take advantage of zIIP specialty engines.
The IBM Smart Analytics Optimizer works by offloading CPU-intensive query processing from DB2 for z/OS to specialized hardware. It defines logical data marts containing related tables and loads them into compressed, memory-resident formats on the accelerator. This provides an order of magnitude performance improvement for queries involving the accelerated tables. The optimizer is transparent to applications and preserves DB2's qualities of service while improving price/performance.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Project Management Semester Long Project - Acuityjpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Webinar: Designing a schema for a Data WarehouseFederico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Monitoring and Managing Anomaly Detection on OpenShift.pdf
Compression for DB2 for z/OS
1. DB2 for z/OS Data and Index Compression
Willie Favero
Senior Certified IT Software Specialist, IBM
wfavero@us.ibm.com
Session Number TDZ-1064B
October 25–29, 2009 • Mandalay Bay • Las Vegas, Nevada
3. Data Management Communities for DB2
Data Management Community – share and interact with peers around the world
– www.ibm.com/software/data/management/community.html
Information Champions – recognizes individuals who have made the most outstanding contributions to the Information Management community
– www.ibm.com/software/data/champion
4. InfoSphere Communities
InfoSphere On-line Community – share and interact with peers around the world
– www.ibm.com/community/infosphere
Information Champions – recognizes individuals who have made the most outstanding contributions to the Information Management community
– www.ibm.com/software/data/champion
5. Business Intelligence & Performance Management Communities
CIUG – Cognos International Users Groups – a worldwide organization for users of IBM Cognos solutions
– Membership is FREE – join today! www.ibm.com/software/data/cognos/usergroups
Information Champions – recognizes individuals who have made the most outstanding contributions to the Information Management community
– www.ibm.com/software/data/champion
6. Thank You!
Your Feedback is Important to Us
Please complete the survey for this session by:
– Accessing the SmartSite on your smart phone or computer at: www.iodsmartsite.com
• Surveys / My Session Evaluations
– Visiting any onsite event kiosk
• Surveys / My Session Evaluations
Each completed survey increases your chance to win an Apple iPod Touch in the daily drawing sponsored by Alliance Tech
7. Data Warehouse / Business Intelligence Sessions
1837 Introducing IBM InfoSphere Warehouse on IBM System z (Beth Hamel) – Mon, Oct 26, 11:30 AM
1328 Data Warehousing on IBM z/OS: Customer Experiences at Univar USA (Kevin Campbell) – Tue, Oct 27, 3:00 PM
1322 IBM DB2 for z/OS Data Warehouse Performance (Robert Catterall) – Wed, Oct 28, 11:00 AM
1474 Data Warehousing and Business Intelligence with IBM DB2 for z/OS (Gary Crupi, Willie Favero) – Thu, Oct 29, 12:30 PM
2591 Resource Management of Mixed Data Warehouse Workloads on System z (Nin Lei) – Thu, Oct 29, 11:00 AM
8. Sessions Dedicated to IBM Smart Analytics Optimizer
3315 IBM Smart Analytics Optimizer: Transforming the Way You Do BI (Jan Klockow) – Mon, Oct 26, 2:45 PM - 3:45 PM
2037 Introducing IBM Smart Analytics Optimizer: DB2 Performance Revolution (Namik Hrle) – Tue, Oct 27, 1:45 PM - 2:45 PM
2711 IBM Smart Analytics Optimizer: Architecture and Overview (Oliver Draese) – Wed, Oct 28, 4:45 PM - 5:45 PM
2971 IBM Smart Analytics Optimizer: Not Your Father's Database System! (Guy Lohman) – Thu, Oct 29, 8:15 AM - 9:15 AM
Usability Sandbox: IBM Smart Analytics Optimizer – Accelerating BI Queries on IBM DB2 for z/OS (Mathias Zapke) – Mon through Thu
IBM Smart Analytics Optimizer is also mentioned in numerous sessions on InfoSphere Warehouse on System z and DB2 for z/OS new features
9. Objectives
• Describe compression fundamentals
• Explain how DB2 implements data compression
• Describe how a dictionary is created and how data compression uses it
• Describe how DB2 implements index compression
• Determine if using data and/or index compression accomplishes the disk savings you were anticipating
12. The Setup
• First compression:
• EDITPROC
• High CPU overhead
• HUFFMAN sample (DSN8HUFF in hlq.SDSNSAMP)
• Hardware compression
• DB2 hardware compression
• ESA Compression (Dec 1993)
• DB2 Version 3
• Hardware helps reduce CPU overhead
• CMPSC instruction
13. And It Just Keeps Getting Faster
The faster the chip speed, the better compression performs.
[Chart (not to any kind of scale): chip cycle times in nanoseconds as new processors were introduced – G6 (9672) at 550 MHz, z800, z900 at 770 MHz, z990 at 1.2 GHz, z9 at 1.7 GHz, and z10 at 4.4 GHz. Compression was introduced on the 9672 or a 711- or 511-based ES/9000.]
14. Just Some Interesting History
[Flowchart: at DB2 startup, is compression hardware present? Yes: use hardware compression. No (detected via a soft 0C1 with no record in SYS1.LOGREC): use software compression.]
Compression automatically comes with hardware today.
15. The Basics
• In 1977 two information theorists, Abraham Lempel and Jacob Ziv, developed lossless data compression techniques
• LZ77 (LZ1) and LZ78 (LZ2)
• Still very popular and widely used today
• LZ stands for Lempel-Ziv (some believe it should be Ziv-Lempel)
• 77 & 78 – the years their lossless compression algorithm was developed and improved
• LZ77 is an adaptive dictionary-based compression algorithm that works off a window of data, using the data just read to compress the next data in the buffer
• The LZ78 variation is based on all of the data available rather than just a limited amount
• The LZW (Lempel-Ziv-Welch) variation was created to improve the speed of implementation; it is not usually considered optimal
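The adaptive-dictionary idea behind LZ78/LZW can be illustrated with a toy sketch. This is only an illustration of the algorithm family the slide names, not DB2's implementation (DB2 uses a fixed dictionary and the hardware CMPSC instruction); the function names here are made up for the example.

```python
# Toy LZW compressor/decompressor: the dictionary is learned adaptively
# as data is read, which is the core LZ78/LZW idea described above.

def lzw_compress(data: bytes) -> list[int]:
    # Dictionary starts with all single bytes (codes 0-255).
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # keep growing the current match
        else:
            out.append(table[w])      # emit code for the longest known string
            table[wc] = len(table)    # learn the new string
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        # A code can reference an entry created one step earlier;
        # in that corner case the entry is w plus its own first byte.
        entry = table[code] if code in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

text = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(text)
assert lzw_decompress(codes) == text  # lossless: the round trip is exact
```

Note the lossless property the next slide emphasizes: decompressing gives back exactly the original bytes.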
16. The Basics
• "Lossless compression" – expanding compressed data gives you the exact same thing you started with
• "Lossy compression" – loses some information every time you compress it
• JPG (JPEG) is a form of lossy compression
• Are you familiar with the Lempel-Ziv algorithm?
• GIF
• TIFF
• PDF (Adobe Acrobat)
• ARC, PKZIP, COMPRESS and COMPACT on the UNIX platform
• StuffIt for the Mac folks
• All use the Lempel-Ziv algorithm and some form of LZ compression
18. Setting Compression On
CREATE … TABLESPACE tablespace_name
………………
COMPRESS NO | COMPRESS YES
Compression is available by partition.
ALTER … TABLESPACE tablespace_name
………………
COMPRESS YES | COMPRESS NO
Rows are not compressed until LOAD or REORG is run.
19. The Dictionary
• The dictionary is created by the LOAD and/or REORG utilities only
• It occupies:
• 4K – 16 pages
• 8K – 8 pages
• 16K – 4 pages
• 32K – 2 pages
• The compression dictionary follows the header and first space map pages
• Dictionaries can be at the partition level (careful, you could have 4096 partitions)
• Not all rows in a table space can be compressed. If the row after compression is not shorter than the original uncompressed row, the row remains uncompressed.
Compression dictionary size:
• 64K (16 x 4K pages) of storage in the DBM1 address space
• The dictionary goes above the bar in DB2 Version 8 and later releases
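The page counts above all come from the same fact: the dictionary is a fixed 64K, so the number of pages it occupies is just 64K divided by the page size. A quick sketch of that arithmetic:

```python
# The compression dictionary is a fixed 64K (16 x 4096 bytes) per
# table space or partition, so its page count depends on the page size.
DICTIONARY_BYTES = 16 * 4096  # 64K

for page_kb in (4, 8, 16, 32):
    pages = DICTIONARY_BYTES // (page_kb * 1024)
    print(f"{page_kb}K page size: dictionary occupies {pages} pages")
# 4K -> 16 pages, 8K -> 8, 16K -> 4, 32K -> 2, matching the slide
```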
20. Dictionary
• Rows are compressed on INSERT
• For an UPDATE:
• Expand, update, then re-compress the row
• UPDATE has the potential to be expensive
• Changes (INSERT & UPDATE) are logged in compressed format
• Larger page sizes may result in better compression
• Resulting rows after compression are variable length
• You might be able to fit more rows with less wasted space in a larger page size
• You cannot turn compression on for the catalog, directory, work files, or LOB table spaces
• Index compression does not use a dictionary
21. Dictionary – 4K Page Size
[Page layout: header page (0), space map page (1), dictionary pages (2-17), data pages (18-???)]
22. Dictionary – Not Always a Good Thing
Each compression dictionary takes 64K (16 pages x 4096 bytes/page) in the DBM1 address space.
• DB2 V3-V7: the dictionary lives below the 2GB bar – could be a potential problem if the DBM1 address space is storage constrained
• DB2 V8 and above: the dictionary moves above the bar. Is the problem resolved? Make sure virtual is backed by 100% real
23. When to Use Compression
Tables with small rows
[Diagram caption: no increase in rows, no savings]
• 255 rows/page max
• 4054 bytes/page available
• 255 rows x 15 bytes = 3825 bytes total
• 3825 bytes / 80% compression = ???? Doesn't matter, you still have the 255-row limit
24. When to Use Compression
Tables with very large rows
[Diagram caption: no increase in rows, no savings]
• 255 rows/page max
• 4054 bytes/page available
• A 4000-byte row at 45% compression = 2200 bytes
• 2 rows/page: 2200 x 2 = 4400 bytes
• 4400 bytes > 4054 bytes
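Both scenarios above reduce to the same arithmetic: rows per 4K data page are capped by the 4054 usable bytes and by the 255-row limit, whichever bites first. A sketch, using the numbers from the slides (the helper function is made up for illustration):

```python
# Rows that fit on a 4K data page: limited by the usable bytes on the
# page AND by the 255 rows/page maximum, whichever is hit first.
USABLE_BYTES = 4054
MAX_ROWS = 255

def rows_per_page(row_len: int) -> int:
    return min(MAX_ROWS, USABLE_BYTES // row_len)

# Small rows (slide 23): 15-byte rows already hit the 255-row cap,
# so even 80% compression buys no extra rows per page.
assert rows_per_page(15) == 255
assert rows_per_page(15 - int(15 * 0.80)) == 255  # 3-byte rows: still 255

# Large rows (slide 24): a 4000-byte row at 45% compression is 2200
# bytes; two such rows (4400 bytes) exceed 4054, so still 1 row/page.
compressed = 4000 - int(4000 * 0.45)
assert compressed == 2200
assert rows_per_page(4000) == rows_per_page(compressed) == 1
```

In both cases the row count per page does not increase, which is exactly the "no increase in rows, no savings" point of the next slide.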
25. No Increase in Rows, No Savings
• If the number of rows does not increase, do not use compression
• For short rows, no help
• For long rows, consider moving to a larger page size: 8K, 16K, or even 32K
26. When to Use Compression
• Encryption
• Little gain (if any) from compressing encrypted data
• No repetitive characters
• Option:
• DSN1COMP has new option EXTNDICT
• For the encryption tool to support encryption and compression combined
• Requires PTFs UK41354 (V8) and UK41355 (V9)
27. Logging
• UPDATEs and INSERTs log in compressed format
• Possible results:
• Reduced logging
• Reduced log I/O
• Any active log reduction would carry over to the archive logs
• However, UPDATEs need to expand and compress to complete
28. Possible Performance Gain
• When compression is on, data pages are brought into the buffer pool in compressed state
• Increasing the number of rows in the same size pool could increase the buffer pool hit ratio
• Increasing the hit ratio could reduce the I/O necessary to satisfy the same number of getpage requests
• If compression doubles the number of rows per page:
• When DB2 loads that page into a buffer pool, it will be loading twice as many rows
• Less I/O is always a good thing
30. LOAD & REORG
• The critical part of data compression is building the dictionary
• The better the dictionary reflects your data, the higher your compression rates are going to be
• There are two choices for building your dictionary:
• LOAD utility
• REORG utility
• These two utilities are the only mechanism available to create a dictionary
31. LOAD Utility
• The LOAD utility uses the first "x" number of rows
• No rows are compressed while the LOAD utility is building the dictionary
• Once the dictionary is created, the remaining rows being loaded will be considered for compression
• With the dictionary in place, any rows inserted (SQL INSERT) will be compressed, assuming the compressed row is shorter than the original uncompressed row
32. DB2 9 Enhancement
• LOAD … COPYDICTIONARY
• Allows priming of a partition with a compression dictionary
• Can only be used with INTO TABLE PART
• REPLACE dictionary example:
LOAD RESUME NO
COPYDICTIONARY 3
INTO TABLE PART 4 REPLACE
INTO TABLE PART 5 REPLACE
• Requires APARs PK63324 and PK63325
33. REORG Utility
• The REORG utility should be your first choice
• It builds a better dictionary
• REORG sees all of the rows because the dictionary is built during its UNLOAD phase
• REORG can create a more accurate, and therefore more efficient, dictionary than the LOAD utility
• The more information used to create the dictionary, the better compression should be
• REORG will compress all of the rows in the table space during the RELOAD phase because the dictionary is now available
• Any row inserted after the dictionary is built will be compressed, assuming the compressed row is shorter than the original row
34. Creating a Dictionary
• Creating the dictionary could potentially be CPU intensive
• If the dictionary works for you, reuse it
• Don't pay the expense to rebuild it
• KEEPDICTIONARY
• REORG and LOAD REPLACE use this utility keyword to suppress building a new dictionary
• Not specifying KEEPDICTIONARY could make REORG/LOAD more costly to run and increase elapsed time
35. DB2 9 Caution
• REORG or LOAD REPLACE will migrate pre-V9 table spaces to reordered row format
• Be careful if using compression
• Dictionaries are automatically converted
• Even if KEEPDICTIONARY is specified (APAR PK41156)
• New DSNZPARM keyword available on the DSN6SPRM macro – HONOR_KEEPDICTIONARY
36. Avoid Converting Existing Compressed
Table Spaces
• PK78958 (Hiper – closed March 30, 2009)
• REORG and LOAD REPLACE will not convert an existing
compressed table space to RRF when migrating to DB2 9
NFM
• Reasons are…
• Possible work-around (from Willie not IBM)
• Turn compression OFF for the table space
• Run REORG (or LOAD REPLACE) against the existing
table space migrating it to RRF
• Turn compression back ON for the table space
• Rerun REORG to rebuild the dictionary and compress all
of the rows in the table space.
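The work-around above could be sketched as the following sequence (object names are hypothetical; this is Willie's suggested sequence, not an IBM-documented procedure; the REORG steps are utility executions, shown here as comments):

```sql
-- Hypothetical names; sketch only
ALTER TABLESPACE MYDB.MYTS COMPRESS NO;
-- Run: REORG TABLESPACE MYDB.MYTS   (migrates the table space to RRF)
ALTER TABLESPACE MYDB.MYTS COMPRESS YES;
-- Run: REORG TABLESPACE MYDB.MYTS   (rebuilds the dictionary and
--                                    compresses all of the rows)
```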
39. DB2 Index Compression…..
• Index compression is new to DB2 9 for z/OS
• Page level compression
• Unlike data row compression:
• Buffers contain expanded pages
• Pages are decompressed when read from disk
• Prefetch performs the decompression asynchronously
• A buffer hit does not need to decompress
• Pages are compressed by the deferred write engine
• Like data row compression:
• An I/O bound scan will run faster
• DSN1COMP utility can be used to predict space savings
Index compression saves space; it is not a performance feature
40. CREATE/ALTER INDEX Compression
CREATE … INDEX index_name
………………
COMPRESS NO or COMPRESS YES
ALTER … INDEX index_name
………………
COMPRESS NO
COMPRESS YES (leaves the index in advisory REORG-pending state until
REORG INDEX, REBUILD INDEX, or REORG TABLESPACE is run)
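As a sketch, the DDL might look like this (the index, table, column, and buffer pool names are hypothetical; per the later slides, a compressed index expands into an 8K or 16K buffer, so it is paired with an 8K or 16K buffer pool):

```sql
-- Hypothetical names; the buffer pool choice determines the
-- maximum compression ratio
CREATE INDEX MYSCHEMA.MYIX
  ON MYSCHEMA.MYTAB (ACCT_ID)
  BUFFERPOOL BP8K0
  COMPRESS YES;

-- Enabling compression on an existing index places it in advisory
-- REORG-pending state until it is rebuilt or reorganized
ALTER INDEX MYSCHEMA.MYIX COMPRESS YES;
```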
41. Index Compression: Performance
• CPU cost is mostly inconsequential. Most of the
cost is asynchronous, the exception being a
synchronous read. The worst case is an index with
a poor buffer hit ratio.
Example: Suppose the index would compress 3-to-1.
You have three options…..
1. Use 8K buffer pool. Save 50% of disk. No change in buffer hit ratio
or real storage usage.
2. Use 16K buffer pool and increase the buffer pool size by 33%. Save
67% of disk, increase real storage usage by 33%.
3. Use 16K buffer pool, with no change in buffer pool size. Save 67%
of disk, no change in real storage used, decrease in buffer hit ratio,
with a corresponding increase in synchronous CPU time.
42. Index Compression Example:
Suppose an index could compress 3-to-1
• 8K buffer pool: each decompressed 8K buffer is written to disk as one
compressed 4K CI. 50% disk space reduction; no increase in virtual
storage cost.
• 16K buffer pool: each decompressed 16K buffer is also written as one
compressed 4K CI, but only 12K of the buffer can be used (4K is left
unused so the compressed page still fits in 4K). 67% disk space
reduction; 33% increase in virtual storage cost.
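The percentages in this example follow from simple arithmetic, sketched below in Python (the helper function is illustrative, not part of DB2):

```python
# The on-disk CI of a compressed index is fixed at 4 KB; a buffer of
# buffer_kb KB holds the decompressed page. With a compression ratio r,
# at most 4*r KB of uncompressed key data can fit into one 4 KB CI, so
# the usable part of the buffer is min(buffer_kb, 4*r) KB.
def index_compression_stats(buffer_kb, ratio, ci_kb=4):
    usable_kb = min(buffer_kb, ci_kb * ratio)  # uncompressed data per page
    disk_saving = 1 - ci_kb / usable_kb        # vs. storing it uncompressed
    unused_kb = buffer_kb - usable_kb          # buffer space left empty
    return usable_kb, disk_saving, unused_kb

# 3-to-1 compression, as in the example:
print(index_compression_stats(8, 3))   # 8K buffer: 50% disk saving
print(index_compression_stats(16, 3))  # 16K buffer: ~67% saving, 4 KB unused
```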
43. …..DB2 Index Compression
• The CI size of a compressed index on disk is
always 4K
• A 4K CI expands into an 8K or 16K buffer; the
choice is the DBA’s, and it determines the
maximum compression ratio
• Compression of the key prefix and RID lists
• A RID list identifies all of the rows for a
particular index key
• An index with a high level of non-uniqueness,
producing long RID lists, achieves about 1.4-to-1
compression
• Compression of unique keys depends on prefix
commonality
44. >4K Page Size for Indexes
• V9 supports 4K, 8K, 16K and 32K page sizes for
indexes
• A large page size is very good for reducing the
frequency of CI splits, which are costly in a data
sharing environment.
• The downside: As with large pages for table
spaces, the buffer hit ratio could degrade.
45. Non-padded indexes
• Non-padded indexes were introduced in DB2 V8
• Useful when an index contains one or more
VARCHAR columns
• Facilitates index only access
• Saves DASD space
• The CPU cost is significant. It grows as a function
of the number of VARCHAR columns.
• Index compression and non-padded indexes are
complementary
47. Catalog Information of Interest
Catalog table   Column       Description
SYSINDEXES      COMPRESS     Compression is in use (Y) or not in use (N)
SYSTABLEPART    COMPRESS     ‘Y’ = compression in use; blank = compression
                             not used; can be at the partition level
SYSTABLEPART    PAGESAVE*    % pages saved; 0 = no savings; + is an
                             increase; includes overhead bytes
SYSTABLEPART    AVGROWLEN    Average row length with or without compression
SYSTABLES       PCTROWCOMP   % rows compressed within total rows active
*There are other columns in the history tables
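A quick way to check realized savings is to query these columns directly; for example (a sketch using the catalog columns listed above):

```sql
-- List compression savings for all compressed table space partitions
SELECT DBNAME, TSNAME, PARTITION, PAGESAVE, AVGROWLEN
  FROM SYSIBM.SYSTABLEPART
 WHERE COMPRESS = 'Y'
 ORDER BY DBNAME, TSNAME, PARTITION;
```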
49. Should Compression Be Used?
• DSN1COMP
• Stand-alone utility
• Will estimate disk savings
• Works for table spaces and indexes
• Can be run against:
• A table space’s underlying VSAM data set
• An index space’s underlying VSAM data set
• Output from a full image copy
• Output from DSN1COPY
• Cannot be run against:
• LOB table spaces
• Catalog (DSNDB06)
• Directory (DSNDB01)
• Work files (e.g., DSNDB07)
• Using DSN1COMP with image copies and DSN1COPY outputs can
make gathering compression information unobtrusive
50. DSN1COMP Keywords
• Choose the keyword values that best match your
table space
• PAGESIZE
• DSSIZE
• FREEPAGE
• PCTFREE should be set exactly as they are defined
on the object you are running DSN1COMP against
to ensure that its estimates are accurate
51. DSN1COMP Keywords
• Choose the keyword values that best match your
table space
• If the REORG utility will be used to build the
dictionary, specify the REORG keyword at run time
• Omitting the REORG keyword defaults to the LOAD
utility estimate
• If the DSN1COMP input is a full image copy,
specify the FULLCOPY keyword to obtain correct
results
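Putting the keywords together, a DSN1COMP job step might look like the following sketch (the STEPLIB library, input data set name, and keyword values are installation-specific placeholders):

```jcl
//* Placeholder data set names; match the keywords to your object
//COMPEST  EXEC PGM=DSN1COMP,
//   PARM='PAGESIZE(4K),PCTFREE(5),FREEPAGE(0),REORG'
//STEPLIB  DD DISP=SHR,DSN=DSN910.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=CATNAME.DSNDBD.MYDB.MYTS.I0001.A001
```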
52. REORG SHRLEVEL CHANGE Warning
• When choosing a VSAM LDS to run against, be
careful if you are using online REORG (REORG
SHRLEVEL CHANGE)
• Online REORG flips between I0001 and J0001 as
the fifth qualifier of the VSAM data sets. Query
the IPREFIX column in the SYSTABLEPART or
SYSINDEXPART catalog tables to find out which
qualifier is currently in use.
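A sketch of such a query (the database and table space names are hypothetical):

```sql
-- IPREFIX is 'I' or 'J', identifying the current instance qualifier
SELECT PARTITION, IPREFIX
  FROM SYSIBM.SYSTABLEPART
 WHERE DBNAME = 'MYDB'
   AND TSNAME = 'MYTS';
```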
53. DSN1COMP with an Index
• LEAFLIM keyword specifies the number of leaf
pages that should be scanned
• Omit LEAFLIM and the entire index will be scanned
• Specifying LEAFLIM could limit how long it will take
DSN1COMP to complete
54. DSN1COMP Table Space Output
• Statistics in kilobytes with and without compression,
and the percentage of bytes you should expect to save
• Number of rows scanned to build the dictionary
• Number of rows processed to deliver the statistics
in the report
• The average row length before and after
compression
• The size of the dictionary in pages
• The size of the table space in pages
• Before compression
• After compression
• Percentage of pages that would have been saved
55. DSN1COMP Report
DSN1944I DSN1COMP INPUT PARAMETERS
512 DICTIONARY SIZE USED
30 FREEPAGE VALUE USED
45 PCTFREE VALUE USED
NO ROWLIMIT WAS REQUESTED
ESTIMATE BASED ON DB2 LOAD METHOD
56. DSN1COMP Report
DSN1940I DSN1COMP COMPRESSION REPORT
301 KB WITHOUT COMPRESSION
224 KB WITH COMPRESSION
25 PERCENT OF THE BYTES WOULD BE SAVED
1,975 ROWS SCANNED TO BUILD DICTIONARY
4,665 ROWS SCANNED TO PROVIDE COMPRESSION ESTIMATE
4,096 DICTIONARY ENTRIES
81 BYTES FOR AVERAGE UNCOMPRESSED ROW LENGTH
52 BYTES FOR AVERAGE COMPRESSED ROW LENGTH
16 DICTIONARY PAGES REQUIRED
110 PAGES REQUIRED WITHOUT COMPRESSION
99 PAGES REQUIRED WITH COMPRESSION
10 PERCENT OF THE DB2 DATA PAGES WOULD BE SAVED
57. DSN1COMP Index Output
• Reports the number of leaf pages scanned
• Number of keys and RIDs processed
• How many kilobytes of key data were processed
• Number of kilobytes of compressed keys produced
• Results are broken down by possible percentage
reduction and buffer pool space usage for both 8K
and 16K index leaf page sizes
• Considerable help when determining the correct leaf
page size
58. DSN1COMP Report
DSN1944I DSN1COMP INPUT PARAMETERS
PROCESSING PARMS FOR INDEX DATASET:
NO LEAFLIM WAS REQUESTED
59. DSN1COMP Report
DSN1940I DSN1COMP COMPRESSION REPORT
38 Index Leaf Pages Processed
3,000 Keys Processed
3,000 Rids Processed
401 KB of Key Data Processed
106 KB of Compressed Keys Produced
Cont’d on next page….
60. DSN1COMP Report
Cont’d from previous page…
EVALUATION OF COMPRESSION WITH DIFFERENT INDEX PAGE SIZES:
----------------------------------------------
8 K Page Buffer Size yields a
51 % Reduction in Index Leaf Page Space
The Resulting Index would have approximately
49 % of the original index’s Leaf Page Space
No Bufferpool Space would be unused
----------------------------------------------
----------------------------------------------
16 K Page Buffer Size yields a
74 % Reduction in Index Leaf Page Space
The Resulting Index would have approximately
26 % of the original index’s Leaf Page Space
3 % of Bufferpool Space would be unused to
ensure keys fit into compressed buffers
----------------------------------------------
61. References
• My Blog: “Getting the Most out of DB2 for z/OS and System z”
• http://blogs.ittoolbox.com/database/db2zos
• IBM Redbook, “DB2 for OS/390 and Data Compression” (SG24-5261)
• although a bit old (circa Nov 1998), it should answer most of your
remaining data compression questions
• “z/Architecture Principles of Operation” (SA22-7832)
• for a complete description of the CMPSC instruction
• “Enterprise Systems Architecture/390 Data Compression” (SA22-7208)
• RedPaper “Index Compression with DB2 9 for z/OS” (REDP-4345)
• “The Big Deal About Making Things Smaller: DB2 Compression”
• z/Journal Magazine, February/March 2008 issue
• IBM Journal of Research and Development
• Volume 46, Numbers 4/5, 2002 - IBM eServer z900
• “The microarchitecture of the IBM eServer z900 processor”
64. Session: TDZ-1064B - Data and Index Compression
Willie Favero
IBM Senior Certified Consulting IT Software Specialist
Dynamic Warehouse on System z Swat Team
IBM Silicon Valley Laboratory
IBM Certified Database Administrator - DB2 Universal Database V8.1 for z/OS
IBM Certified Database Administrator – DB2 9 for z/OS
IBM Certified DB2 9 System Administrator for z/OS
wfavero@us.ibm.com