Key Note Session, IDUG DB2 Seminar, 16 April, London - Julian Stuhler (Triton) and Surekha Parekh
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It outlines current capabilities and future directions for DB2 on both the z/OS and LUW platforms, emphasizing an ongoing focus on reducing costs while improving availability, performance, and analytics capabilities through techniques such as in-memory computing and integration with big data technologies. The future of DB2 skills and the changing IT landscape are also addressed, to help DB2 professionals plan strategies for each of these themes.
Flash Ahead: IBM Flash System Selling Point - CTI Group
IBM's FlashSystem storage is designed to radically accelerate critical applications by providing consistent low latency flash performance. It can integrate with existing disk arrays to offload I/O-intensive workloads while improving overall performance. FlashSystem utilizes IBM's flash technology and software to deliver microsecond response times for applications such as databases, virtual infrastructures, and cloud computing. The FlashSystem family includes the all-flash 710, 720, 810, and 820 models that are optimized for performance, capacity, and mixed workloads.
IBM is the first major storage vendor to deliver eMLC flash storage systems and has been incorporating flash into its servers and storage products for many years. This presentation explains the benefits of using IBM FlashSystem with I/O-intensive workloads where lower latency makes the difference; use cases include online transaction processing (OLTP), business intelligence (BI), online analytical processing (OLAP), virtual desktop infrastructure (VDI), high-performance computing (HPC), and content delivery solutions such as cloud storage and video on demand.
MT58 High performance graphics for VDI: A technical discussion - Dell EMC World
Hyper-converged infrastructure appliances can enable high-end virtualized graphics for all of your users. With proper planning and configuration, VxRail and Virtual SAN Ready Nodes with Horizon and GPU technology from NVIDIA deliver enhanced user experiences. Even the most demanding CAD/CAM "power users" can realize multiple benefits from a virtualized desktop experience. Wyse endpoints complete the end-to-end environment with improved security and rich, rewarding user experiences. In this technical session, learn best practices and planning, configuration, and deployment recommendations to avoid implementation trials and tribulations.
IBM FlashSystem is IBM's portfolio of all-flash storage arrays that provide ultra-low latency, high performance storage for transactional databases, virtualization, and other I/O intensive workloads. The arrays use custom FPGA technology and a layered data protection approach including chip-level ECC, variable stripe RAID, and 2D flash RAID to optimize performance while maintaining reliability. Models are available with SLC or eMLC flash and range in capacity from 1TB to over 1PB within a single rack. IBM FlashSystem can accelerate performance of Oracle, SAP, virtual servers and other applications by up to 12x over conventional storage.
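The layered data protection described above ultimately rests on parity striping. A toy XOR-parity stripe illustrates the principle that variable stripe RAID builds on (a sketch only; the real FlashSystem implementation lives in proprietary FPGA hardware):

```python
from functools import reduce

def make_stripe(data_chunks):
    """Append an XOR parity chunk so any single lost chunk is recoverable."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_chunks)
    return list(data_chunks) + [parity]

def recover(stripe, lost_index):
    """Rebuild the chunk at lost_index by XOR-ing all surviving chunks."""
    survivors = [c for i, c in enumerate(stripe) if i != lost_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

stripe = make_stripe([b"AAAA", b"BBBB", b"CCCC"])
assert recover(stripe, 1) == b"BBBB"  # a failed chunk is reconstructed
```

Because XOR is its own inverse, the parity chunk stands in for any single failed data chunk; schemes like 2D flash RAID layer additional parity dimensions on top of this idea.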
Live Data: For When Data is Greater than Memory - MemVerge
At the Virtual HPC User Forum Special Event, Kevin Tubbs of Penguin Computing talks about a common HPC problem called DGM (data is greater than memory) and the Live Data solution incorporating Big Memory technology.
Dell EMC VMAX All Flash and VMAX3 - powered by the universally trusted HYPERMAX/Enginuity operating system - continue to revolutionize the ways organizations deploy, provision, protect, and manage enterprise storage. This interactive session lets attendees discuss new Dell EMC VMAX features and functionality in an open forum with specialists and engineering leaders. Bring your questions and top-of-mind discussion topics to this always-lively session.
MT25 Server technology trends, workload impacts, and the Dell Point of View - Dell EMC World
As you modernize your data center and become future ready, your server requirements are changing. With innovations such as software-defined storage and networking, your compute platform is now more important than ever. Discover how the highly innovative Dell EMC PowerEdge portfolio is designed to meet the challenges of your future ready data center and how selecting the right compute platform can better enable you to deliver more efficient, secure and manageable IT for your business.
Hypervisor-based VDI uses virtual machines running on hypervisors to provide desktop environments to users, while blade PCs dedicate a physical server, and its resources, to each user. The main differences are in performance, scalability, and cost: VDI offers lower performance but higher density and flexibility, while blade PCs deliver better performance through dedicated resources but lower density and scalability. Administrative overhead and overall cost vary with the environment and the needs of the organization.
Munich 2016 - Z011599 Martin Packer - More Fun With DDF
This document summarizes a presentation about analyzing DDF workloads using performance data. The presentation describes how to classify "alien" DB2 work coming through DDF and determine what is issuing the requests. It provides examples analyzing the behavior of different DDF clients, including identifying a CPU spike from one client and determining if another client is exhibiting "sloshing" behavior. The key lessons are that DDF management requires using WLM and application examination/tuning, and SMF 101 accounting trace records are important for instrumentation.
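The kind of client classification described above can be sketched in miniature: aggregate CPU time per DDF client and flag any client taking a disproportionate share. The records below are hypothetical, already-decoded samples; real SMF 101 accounting records carry many more fields and need a proper parser.

```python
from collections import defaultdict

# Hypothetical, already-decoded accounting records (illustrative names).
records = [
    {"client": "appserver1", "cpu_secs": 0.02},
    {"client": "appserver1", "cpu_secs": 0.03},
    {"client": "batchtool",  "cpu_secs": 4.80},
    {"client": "appserver1", "cpu_secs": 0.02},
]

# Roll up CPU consumption by originating client.
cpu_by_client = defaultdict(float)
for r in records:
    cpu_by_client[r["client"]] += r["cpu_secs"]

# Flag any client consuming more than half of the total CPU - a "spike".
total = sum(cpu_by_client.values())
spikes = [c for c, t in cpu_by_client.items() if t / total > 0.5]
print(spikes)  # ['batchtool']
```

The same roll-up, repeated per interval, is what makes "sloshing" (work migrating back and forth between clients or systems) visible.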
Why z/OS is a Great Platform for Developing and Hosting APIs - Teodoro Cipresso
z/OS Connect Enterprise Edition makes it possible to create new value by enabling the creation of APIs that bring together multiple, disparate, z subsystem assets.
W22 - WebSphere Performance for Multicore and Virtualised Platforms - Hendrik van Run
IBM European WebSphere Technical Conference 2010 presentation
The launch of IBM's POWER7 platform earlier this year continues a trend to multicore and multithreaded processor design. This has an impact on the hardware running in your datacenter, either today or in the near future. At the same time the deployment of virtualisation technology is becoming the new norm. Understanding the characteristics and best practices for these systems will enable you to maximise the investment in software and provide the best application performance. This enables developers, architects and system administrators to deliver solid applications on those platforms.
Live CEO Interview and Webinar Update on the State of Deduplication - Storage Switzerland
Learn From Two Deduplication Veterans: George Crump, Founder of Storage Switzerland, and Tom Cook, CEO of Permabit:
* Are All Deduplication Methods the Same?
* Why is Dedupe so Valuable in the All-Flash Use Case?
* What Can Go Wrong with Deduplication?
* Ask your deduplication questions to the dedupe panel!
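One of the questions above - whether all deduplication methods are the same - largely comes down to how data is chunked and fingerprinted. A minimal fixed-block, hash-based dedupe sketch (real products differ in chunking strategy, hash choice, and metadata handling):

```python
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Logical-to-physical ratio for fixed-size, hash-based deduplication."""
    seen = set()
    blocks = 0
    for i in range(0, len(data), block_size):
        blocks += 1
        # Fingerprint each block; identical blocks hash to the same digest.
        seen.add(hashlib.sha256(data[i:i + block_size]).hexdigest())
    return blocks / len(seen) if seen else 1.0

# Ten identical 4 KiB blocks deduplicate 10:1.
print(dedupe_ratio(b"x" * 4096 * 10))  # 10.0
```

Variable-size (content-defined) chunking finds duplicates that fixed blocks miss when data shifts by a few bytes, which is one reason methods are not all the same.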
The document discusses Novell's Virtual Desktop Infrastructure (VDI) solution called R.E.D.I. It notes that VDI has gone through a hype cycle, from a peak of inflated expectations to a trough of disillusionment as solutions became complex and costly. Novell's R.E.D.I. solution aims to help VDI climb the slope of enlightenment by providing an integrated approach leveraging Novell's strengths in identity management, systems management, and security to deliver a more optimized and productive VDI environment.
Z4R: Intro to Storage and DFSMS for z/OS - Tony Pearson
This session covers basic storage concepts for the z/OS operating system, with examples for flash, disk, and tape devices, and shows how to use DFSMS policy-based management. Presented at IBM TechU in Johannesburg, South Africa, September 2019.
IBM Cloud Object Storage: How it works and typical use cases - Tony Pearson
This session covers the general concepts of object storage, and in particular the IBM Cloud Object Storage offerings. Presented at IBM TechU in Johannesburg, South Africa, September 2019.
This document provides an overview of virtual desktop infrastructure (VDI) and its key components. VDI allows centralizing desktops in a data center for easier management, security and updates. It discusses the virtualization layer, connection brokers, client devices and remote access options. Live demos show examples of VDI solutions from VMware and Citrix. Benefits include cost savings, security, mobility and disaster recovery. Requirements like network performance, storage and connectivity are also reviewed.
This document discusses Citrix Presentation Server 4's new universal printer driver (UPD), which provides several improvements over previous versions:
- It is based on the enhanced metafile (EMF) format which allows printing to be 2-4 times faster and produces smaller print files.
- It supports all device printing options without needing individual drivers, eliminating driver management headaches.
- Eschelon Telecom upgraded to take advantage of the new UPD to simplify printing to their diverse printer fleet across various client environments and locations.
The document discusses the benefits of virtual desktops including improved data security, simplified data backup, simplified disaster recovery, reduced time to deployment, simplified PC maintenance, and flexibility of access. It notes that virtual desktops can enable thinner clients, move computational requirements to the datacenter, and allow access from anywhere there is authorized connectivity.
IBM recently announced a brand-new version of one of the industry's fastest flash storage solutions, the IBM FlashSystem 900 - now with triple the capacity and inline compression on top.
ID114 - Wrestling the Snake: Performance Tuning 101 - Wes Morgan
This document provides an overview of performance tuning for software systems. It discusses setting reasonable performance goals, understanding your system environment and usage patterns, identifying common performance issues, and taking a cyclical approach to ongoing monitoring and optimization. The key points are: set achievable performance goals based on peak demand, understand dependencies, data flows, and where load lies; monitor baseline performance and "red flags" like disk I/O, memory, and CPU usage; revisit performance regularly and after changes; and consider virtual environments, add-ons, OS patches, and hardware drivers.
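The "red flag" monitoring approach described above - baseline the system, then flag readings that fall outside the norm - can be sketched with a simple threshold check. The metric names and sample values here are illustrative; in practice the samples would come from your monitoring tooling.

```python
from statistics import mean, stdev

def red_flags(baseline, current, n_sigma=3.0):
    """Flag metrics whose current reading sits n_sigma above the baseline."""
    flags = []
    for metric, samples in baseline.items():
        threshold = mean(samples) + n_sigma * stdev(samples)
        if current.get(metric, 0.0) > threshold:
            flags.append(metric)
    return flags

# Baseline samples gathered during normal peak-hour operation (illustrative).
baseline = {
    "cpu_pct":   [22, 25, 24, 23, 26],
    "disk_iops": [400, 420, 390, 410, 405],
}
current = {"cpu_pct": 24, "disk_iops": 900}
print(red_flags(baseline, current))  # ['disk_iops']
```

Re-capturing the baseline after every significant change is what keeps the thresholds honest, which is the cyclical part of the approach.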
MT147_Thinking Windows 10? Think simple, scalable, and secure deployments wit... - Dell EMC World
Over 350M Windows 10 devices have been deployed in less than a year, and the recent Windows 10 anniversary update has accelerated the planning of Windows 10 rollouts for the vast majority of enterprises. This is the perfect time to evaluate your desktop deployment strategy. In this session, we will discuss how VMware Horizon with Dell infrastructure can enable your journey to Windows 10, the benefits of centrally deploying Windows 10 through virtual desktops, and what this means for BYOD. We'll also cover how the latest innovations from VMware and Dell can deliver simple, scalable, and secure Windows 10 deployments.
EMC XtremIO and Citrix XenDesktop provide an optimized virtual desktop infrastructure solution. XtremIO's all-flash storage delivers high performance, scalability, and predictable low latency required for large VDI deployments. Its agile copy services and data reduction features help reduce storage costs. Joint demonstrations showed XtremIO supporting thousands of desktops with sub-millisecond response times during boot storms and login storms. A unique plug-in streamlines the automated deployment and management of large XenDesktop environments using XtremIO's advanced capabilities.
Microsoft Server Virtualization and Private Cloud - Md Yousup Faruqu
The document discusses a technology leader with over 10 years of experience in Microsoft, VMware, and Citrix platforms including Windows, Active Directory, private cloud, server and desktop virtualization, high availability, BYOD, and other technologies. The individual holds several patents and certifications including in private cloud, VMware virtualization, Citrix XenDesktop/XenApp, and ITIL.
Cell/B.E. Servers: A Platform for Real Time Scalable Computing and Visualization - Slide_N
This document discusses IBM's Cell/B.E. servers as a platform for scalable real-time computing and visualization. It describes how Cell/B.E. servers can enable distributed, high-performance applications across networks through their low latency and high bandwidth capabilities. Examples of applications discussed include online gaming, virtual worlds, and medical imaging.
Performance case studies, Common Europe, June 2012 - COMMON Europe
This document provides an example of using IBM's Performance Data Investigator tool to analyze collection services data from an IBM Power Systems server. The analysis identifies a period of low CPU utilization that corresponded to a rise in operating system contention. Drilling further into the wait data revealed that most wait time was due to disk page faults from a database server job. The threads of that job were each waiting over 90% of their time. The analysis then identifies the user and database that the server job was processing data for. In the end, while some issues were diagnosed, further job watcher data would be needed to fully understand the source of identified machine level gate serialization waits.
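The core drill-down above - splitting each thread's elapsed time into CPU and wait buckets and computing the wait percentage - is simple arithmetic. The thread names and bucket values below are hypothetical stand-ins for the collection services data the tool actually reads.

```python
# Hypothetical thread samples: elapsed seconds split into buckets,
# loosely mimicking the collection-services wait data described above.
threads = {
    "QSQSRVR-01": {"cpu": 0.8, "disk_page_fault": 9.0, "other_wait": 0.2},
    "QSQSRVR-02": {"cpu": 0.5, "disk_page_fault": 9.3, "other_wait": 0.2},
}

wait_pct = {}
for name, buckets in threads.items():
    elapsed = sum(buckets.values())
    # Everything that is not CPU time counts as wait time.
    wait_pct[name] = 100.0 * (elapsed - buckets["cpu"]) / elapsed
    print(f"{name}: {wait_pct[name]:.0f}% waiting")
```

With both threads waiting over 90% of the time, and most of that in the disk page fault bucket, the next question is which job and database the faults belong to - exactly the drill-down the case study performs.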
This document discusses memory topics related to IBM System z, including:
- Paging subsystem design recommendations to avoid paging and allow full system dumps.
- Enhancements in z/OS R12 to improve dumping performance.
- Benefits of 1MB large pages for TLB coverage and various product exploitations.
- New z/OS R10 64-bit common area and RMF support for monitoring it.
- Considerations for coupling facility memory allocation for structures, dumps, and white space.
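The TLB-coverage benefit of 1MB large pages mentioned above can be quantified with simple arithmetic (the TLB entry count here is illustrative; real sizes vary by machine and TLB level):

```python
TLB_ENTRIES = 512  # illustrative TLB size; real values vary by processor

def coverage_bytes(page_size, entries=TLB_ENTRIES):
    """Memory reachable without taking a TLB miss."""
    return page_size * entries

small = coverage_bytes(4 * 1024)     # 4 KiB pages
large = coverage_bytes(1024 * 1024)  # 1 MiB large pages
print(small // 2**20, "MiB vs", large // 2**20, "MiB")  # 2 MiB vs 512 MiB
```

Covering 256 times more memory with the same number of TLB entries is why large pages help workloads, such as big buffer pools, that touch wide address ranges.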
Re-architecting the Datacenter to Deliver Better Experiences (Intel) - COMPUTEX TAIPEI
The document discusses Intel's efforts to re-architect datacenters to better meet growing demands and enable new digital experiences. Key points include:
- Convergence of cloud, big data, and connected devices is driving new user experiences
- Intel is reducing cost, complexity and power consumption by re-architecting the datacenter at the rack and system level
- Intel's broad portfolio of compute, storage, networking and software technologies allows it to optimize workloads and deliver better performance
Consultancy on Demand is a specially designed service for customers who need varying levels of DB2 support throughout the year.
You purchase a block of 20, 50 or 100 hours. You can then call off hours as and when you need them. No commitment required!
PowerSchool will replace SASI as the new statewide student information system beginning in February. All grades must be submitted in SASI by January 19, and attendance and grades will need to be recorded manually until the conversion is complete. Teachers should complete online training modules in PowerSchool's Gradebook and Portal features by the end of January to familiarize themselves with the new system. Quick tutorial refresher videos are also available on PowerSource for teachers to access. Coaches are available to assist teachers who have trouble accessing the training modules.
The document discusses IBM's pureScale technology, which allows DB2 databases to scale up to 128 nodes for high availability and scalability. pureScale forms a shared-disk cluster and uses proven "data sharing" technology from DB2 for z/OS, providing the agility to rapidly scale capacity up or down with little application change. Triton built a basic two-node pureScale cluster on a budget of under £1K to validate IBM's claims and gain hands-on experience; under load, the cluster delivered 1,000 transactions per second. The document concludes that pureScale provides robust clustering with excellent price/performance.
Final Public Mtg Presentation Jan 11 15 2010guest896a20
The document presents several options for reassigning students to different elementary, middle, and high schools in Beaufort County for the 2010-2011 school year. It provides details on proposed new attendance zones and demographic projections for each option. The objectives are to reduce overcrowding at some schools, increase diversity, and comply with review from the Office of Civil Rights. Staff will continue gathering data and public input before making a final recommendation to the Board of Education.
PowerSchool will replace SASI as the new statewide student information system beginning in February. All grades must be submitted in SASI by January 19. Between January 20 and the PowerSchool conversion in early February, attendance and grades must be recorded manually. Teachers must complete online training modules in PowerSchool's Gradebook and Portal features by the end of January. Quick tutorial refresher videos are also available. Coaches are available to assist teachers in accessing the required training modules.
Latar Belangan Sejarah Antropologi
• Ilmu Antropologi termasuk ilmu-ilmu sosial
yang lain mempunyai sejarah tersendiri.
• Antropologi disebut ilmu yang baru atau
muda karena perkembangan antropologi
relatif baru, sedangkan antopologi disebut ilmu yang tua karena sejarahnya terutama bagian antropologi yang disebut dengan
Etnografi telah dikerjakan orang dari
berbagai bangsa di dunia sudah lebih dari
500 tahun yang lalu.
TDWI San Diego 2014: Wendy Lucas Describes how BLU Acceleration Delivers In-T...IBM Analytics
Originally Published on Sep 25, 2014
Do you experience the snowball effect where you deliver one analytics report and your organization thinks of another and another they need? BLU Acceleration in-memory computing can help. It processes analytics queries at lightning fast speeds, and is a simple to use "load and go" solution. Learn more in this presentation delivered at TDWI San Diego on September 24, 2014.
One of the most CPU resource intensive, and highly used functions, within every IT environment is sorting. Virtually every application needs to sort data, as well as copy information from one location to another. Utilization of zIIP engines for sort, copy, and compression provides an organization with additional processing capacity and can limit the expansion of the 4-Hour Rolling Average of LPAR Hourly MSU Utilization (4HRA) which will prevent an in increase Monthly License Charges (MLC).
View this IBM Systems Magazine webcast to learn:
• How to assess what's driving peaks in your 4HRA
• 5 advantages of offloading workloads to zIIP engines
• Examples of Fortune 500 companies maximizing their mainframe through zIIP exploitation
S de0882 new-generation-tiering-edge2015-v3Tony Pearson
IBM offers a variety of storage optimization technologies that balance performance and cost. This session covers Easy Tier, Storage Analytics, and Spectrum Scale.
- Centering your enterprise for growth discusses recent innovations with IMS that improve performance, affordability, and simplicity. These enhancements help clients optimize and innovate with IMS to support business growth in an era of data, cloud, and mobile engagement.
- Key highlights include a 117,292 TPS and 130% increased workload throughput on z13, as well as pricing innovations like IMS Value Unit Editions to enable new analytics and mobile workloads.
- The document emphasizes how optimizing IMS can help clients accelerate insights from transaction data, rapidly enable cloud and mobile access, and expand the IMS talent population to continue delivering value.
This document reviews IBM's DB2 database software. It provides details on IBM's history, DB2 features such as in-memory optimization and SQL compatibility, benefits like lower costs and improved performance, and examples of how DB2 can be used in industries like insurance, retail, banking, and telecommunications. Hardware, software, and cloud-based deployment requirements are also summarized.
Unlock Cost Savings for your IBM z Systems EnvironmentPrecisely
For many larger enterprises, the pressure to squeeze more performance out of existing mainframes, while avoiding or deferring major upgrades, is a top priority. Routine sort processing tasks such as sort, copy, merge, and compression take up a large and disproportionate share of CPU processing time and related mainframe resources. They are thus obvious areas to examine when seeking relief from mainframe cost and performance pressures.
When evaluating potential solutions many organizations want to “try before they buy” to ensure strong ROI. We have developed tools to help you understand your batch processing and calculate your expected ROI.
Watch this on-demand webinar to hear about:
• How our Syncsort MFX solution drive cost savings
• What a “try before you buy” evaluation looks like
• How to analyze results for your specific situation
IBM Z Cost Reduction Opportunities. Are you missing out?Precisely
Large companies continue to use mainframes for their most business-critical IT workloads. For these companies, finding ways to get more bang for the mainframe buck, in terms of both costs and performance, is always a high priority. Several converging trends in recent years have made it more challenging than ever to achieve the needed organizational performance at the best possible price point. IT leaders in mainframe departments are seeking out ways to speed processing, especially mundane processing tasks such as sorting, copying, merging, compression, and report generation.
Whether you are looking to get more value from your mainframe investment with enhanced performance, improved efficiency, or modernization, Precisely has multiple solutions for customer running IBM Z Systems that can have a dramatic impact on cost and efficiency.
Watch this on-demand webinar to learn about:
• Optimizing mainframe sort workloads
• Leveraging your zIIP processors
• Modernizing your database environment
• Improving visibility into mainframe processing
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Iod session 3423 analytics patterns of expertise, the fast path to amazing ...Rachel Bland
This document provides an overview of the IBM Business Intelligence Pattern with BLU Acceleration. It discusses how this pattern provides a pre-configured deployment for a predictable, high performance analytics solution. It delivers order of magnitude improvements in performance, storage savings, and time to value through the use of in-memory acceleration technologies like Dynamic Cubes and DB2 BLU. Typical performance improvements range from 8-25x over traditional approaches. The pattern allows for a simple, streamlined approach to achieve fast analytics results.
Unstructured data is growing at a staggering rate. It is breaking traditional storage and IT budgets and burying IT professionals under a mountain of operational challenges. Listen as Cloudian and Storage Switzerland discuss panel-style discussion the seven key reasons why organizations can dramatically lower storage infrastructure costs by deploying a hardware-agnostic object storage solution instead of sticking with legacy NAS.
I hosted a webcast with Sr. VP and GM of HP Storage David Scott. David and I talked about flash-optimized storage and the software defined data center. You can find the audio for the webcast at http://hpstorage.me/ASTB-podcasts - they are number 146 and 147.
Greenplum was founded in 2003 and later acquired by EMC Corporation. EMC positioned Greenplum as the foundation of its new Data Computing Division due to Greenplum's massively parallel processing (MPP) architecture and expertise in handling large volumes of data. Greenplum provides a high performance database for data warehousing and analytics through its shared-nothing architecture and ability to scale linearly by adding more nodes.
Cloud computing is the fifth generation of computing that allows applications to be accessed from anywhere via the internet. It is projected to grow six times faster than traditional IT spending, reaching $42 billion by 2012. Key benefits include lower upfront and ongoing costs, easier application access, and improved datacenter utilization. However, security concerns, latency issues, and lack of control present barriers for some applications. Private enterprise clouds can provide cloud advantages internally while addressing barriers through server virtualization, availability, and control over resource allocation.
DB2 pureScale provides a highly scalable and available database solution. It allows customers to start small and grow capacity easily by adding additional cluster members without disrupting applications or incurring extra costs. DB2 pureScale uses a shared nothing architecture with each member running on its own server. It provides a single system view to clients and automatically balances workload across members. Critical features include unlimited scalability, continuous availability even during member failures, and the ability to perform maintenance without outages.
Rob Callaghan_OOW14 IO Performance for DatabaseRob Callaghan
1. The document discusses expanding storage possibilities for enterprises, consumers, and mobile/connected devices through ultra-low latency and scalable I/O performance solutions.
2. It notes that forward-looking statements were made and actual results may differ from projections.
3. It outlines how legacy storage I/O bottlenecks have widened the performance gap between server CPUs and storage. Flash storage provides a solution to eliminate these bottlenecks.
Simplify Data Management and Go Green with Supermicro & QumuloRebekah Rodriguez
Data is growing faster than existing systems are designed to ingest and then analyze. As a result, storage sprawl, wasted resources, and time-consuming complexity are holding back employees and customers from making better business decisions. Supermicro and Qumulo have teamed up to create a simple, sustainable, and fast system to store and manage massive amounts of unstructured data.
Join this webinar to learn how to bring a highly performant and dense infrastructure platform that meets business requirements by taming unstructured data management challenges with Qumulo and Supermicro.
Watch the webinar: https://www.brighttalk.com/webcast/17278/513928
POWER8 the x86 Server Farm - IBM Business Partners use POWER8 to Lower Client...Paula Koziol
Discover ISV solutions built using Linux on IBM POWER8 that will lower your Total Cost of Ownership compared to alternative platform technologies. Gain a competitive advantage with these solutions that analyze more data faster and improve performance at a much lower cost. Linux on IBM POWER8 is the only infrastructure that provides both scale-out and scale-up capabilities that optimizes the efficiency of your workloads while reducing your costs, allowing you to handle any business challenge.
This presentation highlights several high value solutions we have developed with Independent Software Vendors (ISV) IBM Business Partners. Hear directly from our ISV partners on the partnership and the value we deliver to our joint customers.
Visit http://www.ibm.com/power for more information.
Also visit the IBM Systems ISVs YouTube video channel for additional ISV Solutions on IBM Power Systems: https://www.youtube.com/playlist?list=PLN0Az2aq_pjgJs-1fFX0fw4BMRy2GeaHC
Follow us at @IBMSystemsISVs (https://twitter.com/IBMSystemsISVs)
ProfitBricks Cloud Computing IaaS An IntroductionProfitBricks
An introduction to ProfitBricks Cloud Computing IaaS. ProfitBricks is the IaaS provider that offers a painless cloud experience for all IT users, with no learning curve. ProfitBricks boasts flexible cloud servers and networking, an integrated Data Center Designer tool for visual control over the cloud and the best price/performance value available. ProfitBricks was named one of the coolest cloud providers of 2015 by CRN and was also the recipient of two CODiE awards and a Frost & Sullivan Cloud innovation award for 2014.
Similar to A Time Traveller's Guide to DB2: Technology Themes for 2014 and Beyond (20)
This document discusses a security issue that occurred when improperly configuring DB2 federation. Specifically:
1. A client site configured DB2-LDAP federation but also enabled the FED_NOAUTH parameter, bypassing authentication.
2. This meant any user could connect to the database as any other user without providing the correct password.
3. If the database owner username was guessed, full access to all data could be obtained, potentially exposing the database to a major security breach.
The issue was caused by incorrectly enabling the FED_NOAUTH parameter when federation was set up. Proper authentication should have occurred at the database rather than being bypassed. The moral is to not enable
What do you do when disaster strikes? In part 9 of our DB2 Support Nightmare series we look at another DB2 disaster scenario and how it was resolved by the experts at Triton Consulting.
Number 8 in our Top 10 DB2 Support Nightmares series. This month we take a look at what happens when organisations are not able to keep up to date with the latest DB2 technology.
Imagine the scene – a broken database on an unsupported version of DB2, with no backups or log files to recover the database.
Yes – this one really was the stuff of nightmares!
Download if you dare! In part six of our DB2 Nightmares series we see what can happen when an experienced DBA goes on holiday leaving the Junior DBA in charge with no support.
A junior DBA accidentally deleted all rows from a critical table in a pre-production environment. The DBA had connected to the wrong system and used the instance owner userid. The system administrator had enabled the FED_NOAUTH parameter, which bypasses authentication at the instance level. This meant any user could connect as any other user without the correct password and impact the database. The moral is that unintended consequences can occur from small configuration changes and it is important to get skilled DB2 support.
Db2 10 memory management uk db2 user group june 2013 [read-only]Laura Hood
DB2 10 provides significant enhancements to memory management that allow for much greater scalability. Key changes include moving most objects above the 2GB bar, enabling larger buffer pools through 1MB page support, and enhanced real storage monitoring. Migrating to DB2 10 requires ensuring sufficient real storage is available, monitoring real storage usage, and addressing other limiting factors before taking advantage of new features to further scale vertically.
DbB 10 Webcast #3 The Secrets Of ScalabilityLaura Hood
The third in the Migration Month webcast series looking at DB2 10 migration planning. This webcast goes into the scalability benefits available in DB2 10, with Julian Stuhler of Triton Consulting & Jeff Josten of IBM.
DB2 10 Webcast #2 - Justifying The UpgradeLaura Hood
This document discusses justifying an upgrade from DB2 9 or 8 to DB2 10 for z/OS. It outlines potential CPU, productivity, and availability savings from the upgrade. CPU savings can come from improved performance in conversion mode through features like high performance database application transition support. Productivity savings may result from features that improve plan stability and temporal tables. Availability improvements like online reorganization of LOBs can reduce downtime costs. The presentation recommends using IBM's DB2 10 Business Value Assessment Estimator Tool to quantify specific savings for an organization.
DB2 10 for z/OS introduced temporal data support which allows applications to query data as it existed at different points in time. The document discusses system temporal tables, business temporal tables, and bi-temporal tables. It provides examples of temporal DDL, SELECT extensions for querying historical data, and discusses early experiences and performance considerations with temporal data in DB2 10.
DB2DART is a tool that allows DBAs to inspect, format, and repair DB2 databases and objects. It can be used to handle storage reclamation issues by lowering high water marks, detect and repair index corruption, extract data from corrupt tables, and remove backup pending states. DB2DART provides granular analysis at the database, tablespace, and table level and its repair capabilities save DBAs from having to call support or restore from backups in many cases.
Temporal And Other DB2 10 For Z Os HighlightsLaura Hood
The document discusses DB2 10 for z/OS and its new temporal data support feature. It provides an overview of DB2 10, describing new features such as temporal data, virtual storage enhancements, and optimizer enhancements. It then discusses temporal data concepts in more detail, including temporal tables, periods, business temporal tables and system temporal tables. The document provides examples and explains how to implement temporal tables in DB2 10. It concludes by listing further reading materials on DB2 10.
DB210 Smarter Database IBM Tech Forum 2011Laura Hood
DB2 10 for z/OS is a new version of IBM's database software that provides significant performance improvements, new security and temporal data features, and easier migration paths from prior versions. Key enhancements in DB2 10 include 5-20% CPU reductions, up to 10x more threads per subsystem due to virtual storage improvements, row and column access controls, and built-in support for tracking historical data. Customers running DB2 8 or 9 can upgrade directly to DB2 10 using new "skip migration" functionality, or upgrade sequentially from earlier versions. Migrating to DB2 10 requires meeting prerequisites and following steps to move to conversion mode and then normal mode.
Pure Genius: How To Get Mainframe-Like Scalability & Availability For Midrange DB2 discusses pureScale, an optional feature for DB2 that implements shared-disk clustering to provide high scalability and availability. It can support up to 128 members. The architecture uses a shared database, coordination facilities, and InfiniBand networking. Customers experience scalability gains, easy installation, and resilience like continued operation despite coordination facility failure. The presentation evaluates pureScale's benefits and customer experiences.
Episode 4 DB2 pureScale Performance Webinar Oct 2010Laura Hood
DB2 pureScale provides scalability and high performance through its clustered database architecture. It uses a cluster caching facility to manage data consistency across member nodes and leverage low-latency interconnects like InfiniBand. The architecture features two-level buffer pool caching between local and global pools for improved read performance. Monitoring and tuning focuses on optimizing buffer pool hit ratios at both levels. Initial proof points showed near-linear scalability up to 12 nodes and over 80% scalability even at 128 nodes, demonstrating the architecture's ability to transparently scale database workloads across many servers.
DB2 pureScale provides high availability and continuous operations by automatically recovering from component failures through workload redistribution and fast in-flight transaction recovery. It protects databases by balancing workloads across nodes and uses duplexed secondary components to tolerate multiple simultaneous node failures while keeping other nodes online and services available.
DB2 pureScale provides unlimited scalability, application transparency, and continuous availability for transaction processing and ERP workloads. It uses a shared-nothing architecture where multiple database instances (members) connect to a single database and cooperate to provide a single system image to clients. PowerHA pureScale technology handles global bufferpool and locking management to maintain data consistency as members scale out.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
A Time Traveller's Guide to DB2: Technology Themes for 2014 and Beyond
1. #IDUG
A Time Traveller’s Guide to DB2:
Technology Themes for 2014 and
Beyond
Julian Stuhler
Principal Consultant
Triton Consulting
2. #IDUG
Disclaimer
• Any mention of future features, products or overall strategic direction is purely my personal
opinion, and no reliance should be placed upon it ever coming to pass
• The user assumes the entire risk related to its use of information in this presentation. Triton
Consulting provides such information "as is," and disclaims any and all warranties, whether
express or implied, including (without limitation) any implied warranties of merchantability or
fitness for a particular purpose. In no event will Triton Consulting be liable to you or to any
third party for any direct, indirect, incidental, consequential, special or exemplary damages or
lost profit resulting from any use or misuse of this data.
• DB2 for OS/390 and DB2 for z/OS are trademarks of International Business Machines
Corporation. This presentation uses many terms that are trademarks. Wherever we are
aware of trademarks, the name has been spelled in capitals.
2
3. #IDUG
“Prediction is very difficult, especially if it's about
the future”
Niels Bohr, Nobel laureate in Physics
4. #IDUG
DB2 Today
4
“The only constant is change”
Heraclitus
Greek Philosopher
(c.535 BC – 475 BC)
“Good character is not formed in a week or a
month. It is created little by little, day by day.
Protracted and patient effort is needed to develop
good character.”
5. #IDUG
DB2 Today
June 1 2008 Sept 24 2013
Wordles generated by Tagxedo
http://www.tagxedo.com
Old IBM website from Wayback
Machine
http://web.archive.org http://www-01.ibm.com/software/data/
7. #IDUG
DB2 Technology Themes
• Cost Reduction
• High Availability
• In-Memory Computing
• DB2 Skills Availability
• Database Commoditisation
• Big Data
8. #IDUG
Cost Reduction – Today
• Ongoing focus on improving profitability and ruthlessly
eliminating unnecessary costs
• IT spending is a major cost component for all organisations
• Gartner’s 2013 Worldwide IT Spending analysis showed growth rate
of just 0.4% for 2013
• Managing hardware costs
• Moore’s law is still alive and well as it approaches its 50th birthday
• Compression can dramatically reduce DASD costs
• Adaptive compression in DB2 for LUW V10 can yield spectacular gains
• Further savings possible via actionable compression in BLU
• Overall cost savings being partially offset by move to more expensive
SSD devices (although they are getting cheaper too)
• Virtualisation and consolidation technologies are helping to improve
hardware utilisation rates
• Linux on System z offers some intriguing possibilities here
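The DASD savings claimed for compression above can be sanity-checked with simple arithmetic. A minimal sketch, where the 50% and 75% compression ratios are illustrative assumptions, not DB2 measurements:

```python
# Rough DASD saving from table compression for a given compression ratio.
# Ratios here are illustrative assumptions, not measured DB2 results.

def compressed_size_gb(raw_gb, compression_ratio):
    """Size on disk after compression; a ratio of 0.75 means 75% space saved."""
    return raw_gb * (1 - compression_ratio)

raw = 10_000  # 10 TB of raw table data
print(compressed_size_gb(raw, 0.50))  # classic row compression: 5000.0 GB
print(compressed_size_gb(raw, 0.75))  # adaptive compression: 2500.0 GB
```

At these (assumed) ratios, moving from classic to adaptive compression halves the remaining footprint again, which is the kind of gain that offsets pricier SSD devices.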
9. #IDUG
Cost Reduction – Today
• Managing Software licence fees
• MLC pricing on the mainframe means that CPU burned during peak period
(4HRA) directly impacts software costs
• Ongoing focus within DB2 for z/OS to drive down
CPU consumption
• DB2 code optimisation in DB2 10 (and now in DB2 11)
• Increased use of System z speciality engines and
hybrid solutions such as the IBM DB2 Analytic
Accelerator
• Aggressive new packaging options for DB2 for LUW
• AWSE and AESE include lots of additional functionality
such as compression, BLU, pureScale, etc
• Linux on System z can offer major software licence savings
• Managing people costs
• Salary increases have generally been outstripping increases in overall IT
spend, so we’re all consuming a greater proportion of the IT budget
• From ALTER to Autonomics, it’s all about improving productivity and doing
more with less
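The 4HRA mechanism behind MLC pricing can be sketched as follows. This is a simplification: real sub-capacity (SCRT) reporting works per LPAR on finer-grained intervals, and the MSU figures here are invented:

```python
# Sketch of the rolling four-hour average (4HRA) that drives MLC charges:
# the software bill tracks the PEAK of the rolling average, not total usage,
# which is why shaving CPU during the peak period cuts costs.

def peak_4hra(hourly_msus):
    """Peak rolling 4-hour average over a list of hourly MSU readings."""
    window = 4
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# A day with an evening batch spike: the spike, not the quiet hours, sets the peak
day = [200] * 20 + [800, 900, 850, 700]
print(peak_4hra(day))  # 812.5
```

This is also why offloading work to zIIPs, or moving it outside the peak window, reduces the charge even when total CPU consumed is unchanged.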
10. #IDUG
Cost Reduction – Tomorrow
• Signs that pressure is easing on
overall IT budgets
• Latest Gartner estimates show 3.2%
annual increase for 2014, to $3.8
trillion
• 6.9% increase in enterprise software
spending, with CRM, DBMS and data
management the major items
• However, Gartner expects a
renewed focus on implementing
new IT systems which will consume
budget
• Current cost management pressures
unlikely to reduce
11. #IDUG
Cost Reduction – Tomorrow
• Hardware
• Moore’s Law under pressure; only has 6-8 years left before physics
dictates a fundamental shift from CMOS to other technologies
• Photonics, quantum computers
• Ongoing focus on reducing operational
costs will continue to deliver benefits
• Recent Intel POC submerged high-end
servers in 3M’s dielectric “Novec
Engineered Fluid” to increase server
density and cut cooling costs by up
to 95%
• System z hardware approaching thermal limits for indirect cooling so
mainframes may go this way too
12. #IDUG
Cost Reduction – Tomorrow
• Software Licence Fees
• Increased offload to zIIP, IDAA and other speciality processors and hybrid
solutions
• But what happens when we approach 100% CP offload?
• New MLC models to recognise the changing role of the mainframe
• IBM announcement on 8th April 2014 for new model offering up to 60% reduction on
processor capacity reported for Mobile transactions
http://www-03.ibm.com/press/uk/en/pressrelease/43619.wss
processor capacity reported for Mobile transactions http://www-
03.ibm.com/press/uk/en/pressrelease/43619.wss
• Practice of “bundling” likely to continue as a way of maintaining software
revenues on distributed platforms
• People Costs
• Skills shortages likely to continue to increase people costs. See skills section
later
• Continued emphasis on autonomics, ease of use and productivity features
13. #IDUG
High Availability – Today
• Impact of down time in critical IT systems has never been
higher
• Revenue loss
• Reputational damage
• Remedial costs
• Regulatory and Contract Compliance Impact
• How much?
• A 2011 Ponemon Institute report calculated average of $5,617 per
minute for large US data centres
• Amazon “went dark” for 49 minutes in Jan 2013, at estimated cost of
$66,240 per minute
• Unplanned outage is usually the most painful, but planned
outage hurts too
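To put availability targets in the same terms as the per-minute figures above, a small sketch. The $5,617/minute rate is the Ponemon average quoted on this slide; the availability levels are illustrative:

```python
# Annual downtime implied by an availability target, costed at the
# Ponemon average of $5,617 per minute cited above.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
COST_PER_MINUTE = 5617

def downtime_minutes(availability):
    """Expected minutes of downtime per year for a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

for availability in (0.999, 0.9999, 0.99999):
    mins = downtime_minutes(availability)
    print(f"{availability:.5f} -> {mins:7.1f} min/year, ${mins * COST_PER_MINUTE:,.0f}")
```

Each extra "nine" cuts the exposure by a factor of ten, which is the business case behind technologies like data sharing and pureScale.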
14. #IDUG
High Availability – Today
• Relax, you’re working with IBM – DB2 on both platforms is
in good shape for reducing unplanned outage
• Data Sharing on DB2 for z/OS is mature and generally much
better understood by customers than it used to be
• “Gold standard” for continuous availability
• DB2 11 for z/OS contains some valuable new performance
enhancements
• DB2 for LUW pureScale feature implements similar architecture
• Included in AWSE and AESE
• Until recently pureScale supported only on IBM POWER and System
x servers, but as of DB2 10.1 FP2 or DB2 10.5 FP1, non-IBM x86
servers also supported
15. #IDUG
High Availability – Today
• Eliminating planned outage is an ongoing challenge, but
news is generally good and improving all of the time
• Schema change
• Housekeeping
• Preventative maintenance
• Version upgrades
16. #IDUG
High Availability – Tomorrow
• Further data sharing and GDPS enhancements for DB2
for z/OS to re-open the gap with competitors
• Continued expansion of dynamic schema change
capabilities for LUW and z/OS
• Online version upgrades
• Further strides towards truly online version upgrades for DB2 for
z/OS
• First steps for pureScale
17. #IDUG
In-Memory Computing – Today
• Disk access speeds are increasing, but processor speeds are increasing
at an even greater rate
• Therefore, relative “cost” of I/O operations is getting bigger
• Even new (expensive) SSDs are orders of magnitude slower than accessing
processor storage
• Caching data in memory avoids I/O
• Improves elapsed time
• Reduces CPU
• Reduces operational cost
• Allows novel access patterns to be used
• Availability of NAND / flash memory
reduces impact if I/O is required
• SSD
• Flash Express
• Pricing is volatile/complicated,
but memory is a one-off cost
[Diagram: relative access times. Buffer pool: nanoseconds (10^-9); DASD cache: under 2 milliseconds (10^-3); DASD disk: over 5 milliseconds]
18. #IDUG
In-Memory Computing – Today
• OLTP
• Today’s server platforms can cache large amounts of data in memory
• zEC12 can support up to 3TB per CEC (1TB per LPAR)
• High-end Intel-based servers support 6-8TB per server
• Average deployed server memory is increasing on both mainframe and
distributed platforms
• Specific steps being taken to allow DB2 customers to exploit larger
memory footprints for OLTP workloads
• PGFIX(YES) in DB2 9
• PGSTEAL(NONE) and high-performance DBATs in DB2 10
• 1MB / 2GB page frames in DB2 10 / DB2 11
• Large (16MB) and Huge (16GB, AIX only) OS page support in DB2 for
LUW
19. #IDUG
In-Memory Computing – Today
• Analytics
• DB2 10.5 for LUW (AWSE & AESE) includes “BLU” technology - a collection of
novel technologies for optimising analytic queries,
including some specific in-memory techniques
• Columnar data store with patented dynamic
in-memory optimisation for data prefetch and
retention – “treats DRAM as disk”
• Data held in compressed format in memory, while
still allowing joins and predicate evaluation –
“actionable compression”
• Very impressive query performance across a wide
variety of analytic (and even some “heavy” OLTP)
workloads
• 10x – 25x elapsed time improvement is common
• Ability to more fully utilise all of the available
memory / CPU in a given server configuration
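The "actionable compression" idea can be illustrated with a toy dictionary-encoded column: predicates are evaluated against small integer codes, so rows never need decompressing to be filtered. This is a sketch of the general technique, not IBM's implementation:

```python
# Toy illustration of predicate evaluation on dictionary-encoded column
# data ("actionable compression"): the filter compares codes, never the
# decompressed values themselves.

def dictionary_encode(column):
    """Build a value->code dictionary and the encoded column."""
    dictionary = {v: code for code, v in enumerate(sorted(set(column)))}
    return dictionary, [dictionary[v] for v in column]

def filter_equals(encoded, dictionary, value):
    """Return row positions matching value, comparing integer codes only."""
    code = dictionary.get(value)
    if code is None:
        return []
    return [i for i, c in enumerate(encoded) if c == code]

regions = ["EU", "US", "EU", "APAC", "US", "EU"]
d, enc = dictionary_encode(regions)
print(filter_equals(enc, d, "EU"))  # [0, 2, 5]
```

Scanning small integer codes instead of full values is also far friendlier to CPU caches and SIMD instructions, which is part of where columnar engines get their speed.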
20. #IDUG
In-Memory Computing – Tomorrow
• Future zEnterprise machines likely to significantly increase maximum
memory capacity per CEC / LPAR
• Cost per GB likely to continue with general downward trend
• Average installed memory per CEC will continue to increase
• DB2 for z/OS may page-fix buffer pools by default
• More common customer use of large / huge page frames
• Page fixing and large page frame support for other DB2 storage areas
(e.g. EDM pool)
• Possible use of pageable 1MB page frames supported by zEC12
• Increased autonomic capability, reduction of memory-specific system
parameters
• DB2 BLU will continue to evolve
• Big push just starting on DB2 BLU in the cloud
21. #IDUG
Database Commoditisation – Today
• We’ve always lived in a heterogeneous world, but perception of
databases as a commodity is increasing
• Many reasons, including
• The ubiquity of SQL
• The rise of packaged solutions
• Java (JDBC, frameworks)
• RDBMS vendor compatibility / migration initiatives
• SOA
• Skills availability and support team size
• The result
• Lack of management awareness of business value of a specific
database
• Support teams and developers working with many database systems
• Lowest common denominator approach
22. #IDUG
Database Commoditisation – Today
• Fight back!
• Make it your mission to keep your management aware of
the unique business value of DB2
• If you have to be a Jack of all trades, at
least try to become a master of one
• Guess which one?
• Take pragmatic approach to lowest
common denominator issue
• Fight the battles worth winning
• Accept the rest
23. #IDUG
DB2 Skills – Today
• DB2 is getting more complex / capable in every release
• At the same time, IBM is trying to make it easier to use / understand
• Great until something needs fixing “under the hood”
• DB2 skills demographic is changing
• Source: My own observations only – no scientific backup!
[Charts: distribution of DB2 technicians by skill level, then and now]
24. #IDUG
DB2 Skills – Today
• Source of skills is changing dramatically too
[Diagram: sources of DB2 skills, then and now. Then: Apprenticeship, Formal Courses, Conferences, DB2-L, Manuals & Redbooks, Magazines. Now: the same sources joined by YouTube, SlideShare, blogs and online articles]
25. #IDUG
DB2 Skills – Tomorrow
• Jury still out on longer-term impact of greying mainframe
workforce
• IBM making efforts with its Academic Initiative
• Training provided for 80,000 students at over 1,000 schools in 70
countries during the past 7 years
• 3 mainframe Massive Open On-line Courses
(MOOCs) will be made available in stages
throughout the year (no cost and
available to anyone, anywhere,
at any time)
• Expansion of DB2’s autonomic
capabilities will help, but requirement
for some deeper specialist skills
likely to continue for foreseeable
future
[Chart: task complexity rising over time, offset by growing autonomic capability]
26. #IDUG
DB2 Skills – Tomorrow
[Charts: permanent UK jobs requiring SQL, DBA, Performance Tuning and Big Data skills as a proportion of total demand]
27. #IDUG
Big Data – Today
• Big Data and Analytics are everywhere you look
• What’s a DB2 guy (or girl) to do?
• Things to keep in mind
• Hadoop is not a replacement for existing infrastructure, but a tool to
augment it
• Your role is still vital to your organisation!
• “90% of the world’s data is unstructured, but 90% of the world’s most
important data is structured”
David Barnes, IBM, 2012 IDUG Europe Keynote Speaker
• Database people have been doing big data and analytics for the past
40 years or so, just with different tools and terms (and capitalisation)
• If you have the right attitude / mind-set, a DBA background is an
excellent stepping stone to becoming a wealthy “Data Scientist”
28. #IDUG
Big Data – Today
• One of the secrets to DB2’s longevity is to “embrace and
extend” new technologies, and Big Data is no exception
• DB2 for z/OS
• IBM DB2 Analytics Accelerator for efficiently running complex
query workloads
• SQL extensions in most recent releases to improve query /
analytic workloads
• DB2 for LUW
• BLU Acceleration to dramatically speed up analytics and
reporting, by multiple orders of magnitude
• Part of DB2 for LUW V10.5 (included in AWSE and AESE)
• Remember that DB2 for LUW still holds Guinness World Record
for Largest Data Warehouse (3PB)
29. #IDUG
Big Data – Today
• Integration between DB2 and Hadoop opens new
possibilities for gaining actionable insight
30. #IDUG
Big Data – Tomorrow
• DB2 will continue with “embrace and extend” philosophy
• Efficient interaction with highly optimised big data platforms such as Hadoop /
BigInsights
• Further expand internal analytic / big data capabilities
• One size does NOT fit all !
• Each approach has strengths and
weaknesses, best one is dependent
on application requirements
• NoSQL = Not Only SQL (or YeSQL)
• Several NoSQL databases have
added SQL capabilities
• NoSQL for z/OS!
• Simple Key / value NoSQL database for z/OS, currently freeware
• http://www.nosqlz.com
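For readers who have only worked with relational DB2, the key/value model mentioned above is easy to sketch. A toy in-process store, illustrative only and not the NoSQLz product itself:

```python
# Minimal sketch of the key/value data model: opaque values retrieved by
# key, with no schema, joins or SQL. Toy in-process example only.

class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.put("cust:1001", {"name": "Acme", "region": "EU"})
print(store.get("cust:1001"))  # {'name': 'Acme', 'region': 'EU'}
```

The trade-off is the classic "one size does NOT fit all" point above: lookups by key are trivially fast and scalable, but anything resembling an ad-hoc query is the application's problem.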
31. #IDUG
Some Questions to Ponder
• What have you done recently to:
• Reduce the operational costs of the systems you support?
• Improve your personal productivity?
• Make the savings that you’ve made visible to the budget holders?
• Test your failover / disaster recovery arrangements?
• Review your housekeeping / maintenance / upgrade procedures to
ensure you’re maximising availability?
• Improve and expand your DB2 skills?
• Make management aware of the business value of DB2?
• Keep yourself relevant in a Big Data world?
• Prepare for the future?
“The future depends on what you do today”
Mahatma Gandhi
32. #IDUG
Where’s the future I was promised?
Portable fusion
reactor
Self-Tying
Laces
Hoverboard
Flying
cars
33. #IDUG
A Time Traveller’s Guide to DB2:
Technology Themes for 2014 and
beyond
Julian Stuhler
Principal Consultant
Triton Consulting