Mainframe departments in larger enterprises are challenged to keep up with rising data volumes from ever-changing applications running on the mainframe and even in hybrid cloud environments. There is continued pressure to squeeze more performance out of their existing mainframes to reduce costs and defer major upgrades. Obvious measures for relief from these pressures revolve around the mundane processing tasks (sort, copy, merge, compression, report generation) that take up a large and disproportionate share of CPU processing time and related mainframe resources.
That’s where Precisely’s Syncsort MFX has always delivered key benefits like optimized sorting, reduced CPU usage, and faster job completion. Precisely has a 50-year history of innovation in this area, and we continue to innovate our Syncsort MFX solution. We have several important enhancements coming for our customers. In addition to highlighting this new functionality, we want to share some powerful tips and tricks we have seen our customers use to improve their IBM z/System sorting.
Watch this on-demand webinar and learn:
• How Syncsort MFX can improve your sort performance and efficiency
• What enhancements are coming for Syncsort MFX customers
• Some tips and tricks for getting the most out of Syncsort MFX
Unlock Cost Savings for your IBM z Systems Environment (Precisely)
For many larger enterprises, the pressure to squeeze more performance out of existing mainframes, while avoiding or deferring major upgrades, is a top priority. Routine sort processing tasks such as sort, copy, merge, and compression take up a large and disproportionate share of CPU processing time and related mainframe resources. They are thus obvious areas to examine when seeking relief from mainframe cost and performance pressures.
When evaluating potential solutions, many organizations want to “try before they buy” to ensure strong ROI. We have developed tools to help you understand your batch processing and calculate your expected ROI.
Watch this on-demand webinar to hear about:
• How our Syncsort MFX solution drives cost savings
• What a “try before you buy” evaluation looks like
• How to analyze results for your specific situation
Evaluate the Benefits of TFP for Your IBM Z System (Precisely)
This document discusses how organizations can reduce CPU utilization and control costs on IBM Z systems when moving to Tailored Fit Pricing (TFP). TFP is a consumption-based licensing model that charges customers based on the exact MSUs consumed each month. The document recommends using Syncsort solutions like MFX and ZPSaver to optimize sort operations and offload work to zIIP engines. This can significantly reduce CPU utilization, control software costs, and potentially delay needed CPU upgrades in a TFP environment where all processing is charged. It also provides tips for analyzing workload trends and consumption drivers to prepare for the new pricing model.
One of the most CPU-intensive and most heavily used functions in any IT environment is sorting. Virtually every application needs to sort data, as well as copy information from one location to another. Using zIIP engines for sort, copy, and compression gives an organization additional processing capacity and can limit growth in the 4-Hour Rolling Average of LPAR Hourly MSU Utilization (4HRA), which in turn prevents an increase in Monthly License Charges (MLC).
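To make the 4HRA mechanics concrete, here is a minimal sketch (with made-up hourly MSU readings, not Precisely's tooling) that computes the peak 4-hour rolling average for an LPAR and shows how removing MSUs from the peak window, for example by offloading zIIP-eligible sort and copy work, lowers the figure that drives MLC:

```python
# Minimal sketch: peak 4-hour rolling average (4HRA) of hourly LPAR MSU readings.
# The sample values below are hypothetical; real data would come from SMF records.

def peak_4hra(hourly_msus, window=4):
    """Return the highest average over any `window` consecutive hourly samples."""
    if len(hourly_msus) < window:
        raise ValueError("need at least one full window of samples")
    return max(
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    )

# Hypothetical 24 hourly MSU readings for one LPAR, with an overnight batch peak.
msus = [310, 305, 420, 480, 510, 470, 330, 300, 280, 270, 260, 265,
        270, 280, 290, 300, 310, 320, 330, 340, 350, 345, 330, 320]

print(f"Peak 4HRA: {peak_4hra(msus):.1f} MSUs")

# If zIIP-eligible sort/copy work removes ~60 MSUs from each hour of the peak window,
# the billable peak drops accordingly.
offloaded = [m - 60 if 2 <= h <= 5 else m for h, m in enumerate(msus)]
print(f"Peak 4HRA after offload: {peak_4hra(offloaded):.1f} MSUs")
```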
View this IBM Systems Magazine webcast to learn:
• How to assess what's driving peaks in your 4HRA
• 5 advantages of offloading workloads to zIIP engines
• Examples of Fortune 500 companies maximizing their mainframe through zIIP exploitation
New Mainframe Developments and How Syncsort Helps You Exploit Them (Precisely)
Mainframes have been the workhorses of the corporate data center for more than half a century. More than 70% of Fortune 500 enterprises continue to use mainframes for their most crucial business functions. Syncsort has been the leader in mainframe innovations since the early years. Recently, IBM made two major announcements for their mainframe customers.
IBM has recently made two major IBM z announcements: new Tailored Fit Pricing and the next-generation z15 mainframe. In this on-demand webinar we’ll help you understand the opportunities and challenges that the new pricing model and z15 systems present. We’ll also share exciting sorting technology innovations from Syncsort that drive cost savings with the new z15.
You will learn:
• How to evaluate the decision to move to Tailored Fit Pricing or to stay on 4HRA pricing
• How Syncsort's Elevate MFSort (formerly MFX) reduces workloads on the new z15 mainframes
• How we plan to exploit IBM’s new instructions for sort acceleration
Best Practices – Extreme Performance with Data Warehousing on Oracle Database (Edgar Alejandro Villegas)
The document discusses best practices for data warehousing performance on Oracle Database. It covers using Oracle Exadata Database Machine for mixed workloads including data warehousing. Key strategies discussed are partitioning tables for pruning and parallelism, using hybrid columnar compression for storage savings and faster scans, and enabling auto parallelism and queuing for optimal parallel query processing.
Introducing MFX for z/OS 2.1 & ZPSaver Suite (Precisely)
Syncsort MFX for z/OS 2.1 leverages the latest advancements in mainframe technology, including IBM’s zHPF architecture and deeper exploitation of zIIP engines, to help you overcome the challenges of modern mainframe systems.
This document discusses various ways to optimize FirebirdSQL performance, including hardware optimization like using SSD drives, RAID 10, and enabling write caches. It also discusses optimizing the Firebird server configuration by increasing cache sizes, enabling CPU affinity, and optimizing programming and SQL. Specific tips include using prepared statements, limiting record fetches, avoiding unnecessary indexes, and using analytical functions in Firebird 3.0. Overall, the document provides over 45 tips across hardware, software, programming, and SQL optimizations to improve FirebirdSQL performance.
Best Practices – Extreme Performance with Data Warehousing on Oracle Databa... (Edgar Alejandro Villegas)
The document discusses best practices for data warehousing performance on Oracle Database. It covers Oracle Exadata Database Machine capabilities like intelligent storage, hybrid columnar compression, and smart flash cache. It also discusses partitioning, parallelism, monitoring tools, and data loading techniques to maximize warehouse performance.
The document provides an overview of microprocessors and microcontrollers. It discusses the basic architecture of microprocessors, including the Von Neumann and Harvard architectures. It compares RISC and CISC instruction sets. Microcontrollers are defined as single-chip computers containing a CPU, memory, and I/O ports. Common PIC microcontrollers are described along with their characteristics such as speed, memory types, and analog/digital capabilities. The document also outlines best practices for selecting a suitable microcontroller for a project, including identifying hardware interfaces, memory needs, programming tools, and cost/power constraints.
Cobol performance tuning paper lessons learned - s8833 tr (Pedro Barros)
The document summarizes updates made to IBM's COBOL Performance Tuning Paper, including:
1) New compiler options like BLOCK0 and XMLPARSE that can significantly impact performance, as well as runtime options like INTERRUPT.
2) Performance improvements to COBOL code generation that have occurred over many releases, allowing code to better exploit modern hardware.
3) Specific examples where performance optimizations like dynamic CALL under CICS provided orders of magnitude faster execution versus other approaches.
Dynamics CRM high volume systems - lessons from the field (Stéphane Dorrekens)
Three field stories from companies describe their experiences with high volume CRM implementations: a financial institution with 8,000 users and 350GB of data across two implementations; a financial institution with 2,000 users, 2,500GB of data across two implementations; and a financial institution with 1,000 users and over 450GB of data across six implementations, with 50GB added per month for the largest one. The document discusses lessons learned from these implementations regarding infrastructure design, functional design, and performance testing to support high volume systems.
IMS04 BMC Software Strategy and Roadmap (Robert Hain)
The document discusses BMC's strategy and future plans for its IMS products. Recent enhancements included increased support for IMS versions 13.1 and 14, as well as new capabilities for database monitoring, availability, performance optimization, and workload management. Future plans involve merging features across IMS product offerings, incorporating capabilities from acquired Neon products, and supporting new IMS releases. BMC is committed to innovation, exploiting new technologies, and investing in talent to simplify DBMS administration and optimize costs, availability and performance for customers.
How to Improve RACF Performance (v0.2 - 2016) (Rui Miguel Feio)
When hundreds and sometimes thousands of security validations occur every minute on the mainframe, performance and availability are paramount. In this session the presenter shows different techniques that, when implemented, can help improve RACF performance so that it does not become the source of your performance problems.
The document describes Oracle's new SPARC T4 servers, which provide up to 5x better single-threaded performance than previous SPARC servers. The SPARC T4 servers are optimized for Oracle software like the Oracle Database and WebLogic Suite. They include integrated security features like encryption without performance penalties. The document provides an overview of the SPARC T4 processor architecture and performance advantages, and describes how the new servers are optimized solutions for running Oracle applications.
Session 6638 - The One-Day CICS Transaction Server Upgrade: Migration Conside... (nick_garrod)
InterConnect 2015 Session 6638 The One-Day CICS Transaction Server Upgrade: Migration Considerations. After months of writing letters of justification you‘re finally OK to order CICS TS. The software has arrived and is ready to install. What next? Do you have to buy lunch for the z/OS Sys Progs to get them to install the latest release of z/OS? What do you need from the database administrators? Is this the time to “byte” the bullet and use CICSPlex Systems Manager? Are your CICS Exits and User Replaceable Modules still valid? What will it take to get them working with this release of CICS? What will you have to tell the Application Programmers? Is your shop relying on any function that has been removed from this release of CICS? Come to this technical session to get the answers to the above questions and more.
The document discusses strategies for improving application performance on POWER9 processors using IBM XL and open source compilers. It reviews key POWER9 features and outlines common bottlenecks like branches, register spills, and memory issues. It provides guidelines on using compiler options and coding practices to address these bottlenecks, such as unrolling loops, inlining functions, and prefetching data. Tools like perf are also described for analyzing performance bottlenecks.
Maaz Anjum - IOUG Collaborate 2013 - An Insight into Space Realization on ODA... (Maaz Anjum)
The document provides an overview of Maaz Anjum, a solutions architect specializing in Oracle products like OEM12c, Golden Gate, and Engineered Systems. It lists his email, blog, and experience using Oracle products since 2001. It also provides details about Bias Corporation, the company he works for, including its founding date, certifications, expertise, customers, and implementations.
Approximation techniques used for general purpose algorithms (Sabidur Rahman)
Survey on approximation techniques used for general purpose algorithms, data parallel applications and solid-state memories. It is interesting to see how approximation algorithms can contribute to solving real-life problems with better efficiency and lower cost!
Questions? krahman@ucdavis.edu.
Ashnik EnterpriseDB PostgreSQL - A real alternative to Oracle (Ashnikbiz)
A technical introduction to PostgreSQL and Postgres Plus, the enterprise-class PostgreSQL database from EDB - a real alternative to Oracle and other conventional proprietary databases.
PostgreSQL 10 will include several new features and improvements, including logical replication which allows replicating specific tables, quorum-based synchronous replication, improved partitioning support, pushing more computations to foreign databases with FDWs, expanded parallel query capabilities, techniques to reduce write amplification, indirect indexes, executor and statistics overhauls, and easier backup/replication configuration defaults. Many of these features are still in development.
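As a quick illustration of the headline feature, the sketch below shows table-level logical replication driven from Python with psycopg2; the connection strings and the `orders` table are hypothetical placeholders, and this is an assumed setup rather than anything prescribed by the source.

```python
# Minimal sketch of PostgreSQL 10 logical replication for a single table.
# Assumes the publisher runs with wal_level = logical; DSNs and table names are placeholders.
import psycopg2

PUBLISHER_DSN = "host=source-db dbname=appdb user=repl password=secret"
SUBSCRIBER_DSN = "host=target-db dbname=appdb user=repl password=secret"

# On the publisher: publish changes for one specific table only.
with psycopg2.connect(PUBLISHER_DSN) as pub:
    with pub.cursor() as cur:
        cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders;")

# On the subscriber: CREATE SUBSCRIPTION cannot run inside a transaction block,
# so autocommit is required here.
sub = psycopg2.connect(SUBSCRIBER_DSN)
sub.autocommit = True
with sub.cursor() as cur:
    cur.execute(
        "CREATE SUBSCRIPTION orders_sub "
        "CONNECTION 'host=source-db dbname=appdb user=repl password=secret' "
        "PUBLICATION orders_pub;"
    )
sub.close()
```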
Maximizing the Value of IBM's New Mainframe Pricing Model with Syncsort Elevate (Precisely)
IBM’s new Tailored Fit Pricing is a generational shift in IBM Z software pricing, offering an alternative to the Rolling 4-Hour Average model introduced in 1999. Designed to eliminate the challenges of forecasting demand for hybrid cloud environments and dynamic workloads, this new pricing model can make your mainframe licensing costs more predictable and manageable.
Our Elevate MFSort and Elevate ZPSaver products, combined with the new Tailored Fit Pricing, offer the opportunity to achieve even more efficiency and cost savings through high-performance sort and the ability to offload expensive processing to zIIP engines.
Learn about:
The two new Tailored Fit Pricing options from IBM
How Elevate MFSort and Elevate ZPSaver cut down workloads
How Syncsort can help you project the savings you may achieve
Data Virtualization Reference Architectures: Correctly Architecting your Solu... (Denodo)
Correctly Architecting your Solutions for Analytical & Operational Uses reviews the two main types of use cases that can be solved with the Denodo Platform. Both high concurrency scenarios and big reporting use cases are discussed in this presentation in a comparative way, explaining the different approaches that you must take to be successful in any situation.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/wdZgpo.
Presentation db2 best practices for optimal performance (solarisyougood)
This document summarizes best practices for optimizing DB2 performance on various platforms. It discusses sizing workloads based on factors like concurrent users and response time objectives. Guidelines are provided for selecting CPUs, memory, disks and platforms. The document reviews physical database design best practices like choosing a page size and tablespace design. It also discusses index design, compression techniques, and benchmark results showing DB2's high performance.
Reduced instruction set computing, or RISC (pronounced 'risk', /ɹɪsk/), is a CPU design strategy based on the insight that a simplified instruction set provides higher performance when combined with a microprocessor architecture capable of executing those instructions using fewer microprocessor cycles per instruction.
AWS re:Invent 2016 | DAT318 | Migrating from RDBMS to NoSQL: How Sony Moved fr... (Amazon Web Services)
In this session, you will learn the key differences between a relational database management service (RDBMS) and non-relational (NoSQL) databases like Amazon DynamoDB. You will learn about suitable and unsuitable use cases for NoSQL databases. You'll learn strategies for migrating from an RDBMS to DynamoDB through a 5-phase, iterative approach. See how Sony migrated an on-premises MySQL database to the cloud with Amazon DynamoDB, and see the results of this migration.
The document discusses operating systems and their functions. It covers:
1) The OS acts as an interface between the user and hardware, allocating and managing resources like memory, CPU time, and I/O devices.
2) The OS supports application software, loads programs into memory, and transforms raw hardware into a usable machine.
3) Key functions of the OS include scheduling processes, managing memory, handling I/O, and allocating resources like the CPU. The OS allows for time-sharing of resources between multiple users and programs.
The document discusses Oracle's Zero Data Loss Recovery Appliance. It aims to fundamentally change how databases are protected by pushing database changes in real-time instead of periodic backups. This minimizes impact on production databases and ensures zero data loss. It stores database changes efficiently on disk and can restore databases to any point in time using these deltas. It also creates space-efficient "virtual" full backups without requiring full backups. This enables long retention of backup history with minimal storage.
The concept of InfiniFlux: the ultra-high-speed database that stores and processes time series data.
To provide high-speed processing, InfiniFlux has very different characteristics from conventional DBMSs such as Oracle and DB2.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe (Precisely)
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Similar to Continued Innovation in IBM z/System Sort Optimization with Syncsort MFX
AI-Ready Data - The Key to Transforming Projects into Production.pptx (Precisely)
Moving AI projects from the laboratory to production requires careful consideration of data preparation. Join us for a fireside chat where industry experts, including Antonio Cotroneo (Director, Product Marketing, Precisely) and Sanjeev Mohan (Principal, SanjMo), will discuss the crucial role of AI-ready data in achieving success in AI projects. Gain essential insights and considerations to ensure your AI solutions are built on a solid foundation of accurate, consistent, and context-rich data. Explore practical insights and learn how data integrity drives innovation and competitive advantage. Transform your approach to AI with a focus on data readiness.
Building a Multi-Layered Defense for Your IBM i Security (Precisely)
In today's challenging security environment, new vulnerabilities emerge daily, leaving even patched systems exposed. While IBM works tirelessly to release fixes as they discover vulnerabilities, bad actors are constantly innovating. Don't settle for reactive defense – secure your IT with a layered approach!
This holistic strategy builds multiple security walls, making it far harder for attackers to breach your defenses. Even if a certain vulnerability is exploited, one of the controls could stop the attack or at least delay it until you can take action.
Join us for this webcast to hear about:
• How security risks continue to evolve and change
• The importance of keeping all your systems patched and up-to-date
• A multi-layered approach to network, system object and data security
Navigating the Cloud: Best Practices for Successful Migration (Precisely)
In today's digital landscape, migrating workloads and applications to the cloud has become imperative for businesses seeking scalability, flexibility, and efficiency. However, executing a seamless transition requires strategic planning and careful execution. Join us as we delve into practical insights on cloud migration, exploring three key topics:
i. Considerations to take when planning for cloud migration
ii. Best practices for successfully migrating to the cloud
iii. Real-world customer stories
Unlocking the Power of Your IBM i and Z Security Data with Google Chronicle (Precisely)
In today's ever-evolving threat landscape, siloed systems and data leave organizations vulnerable. This is especially true when mission-critical systems like IBM i and IBM Z mainframes are not included in your security planning. Valuable security data from these systems often remains isolated, hindering your ability to detect and respond to threats effectively.
Ironstream can bridge this gap for IBM systems by integrating the important security data from these mission-critical systems into Google Chronicle, where it can be seen, analyzed, and correlated with data from other enterprise systems. Here's what you'll learn:
• The unique challenges of securing IBM i and Z mainframes
• Why traditional security tools fall short for mainframe data
• The power of Google Chronicle for unified security intelligence
• How to gain comprehensive visibility into your entire IT ecosystem
• Real-world use cases for integrating IBM i and Z security data with Google Chronicle
Join us for this webcast to hear about:
• The unique challenges of securing IBM i and IBM Z systems
• Real-world use cases for integrating IBM i and IBM Z security data with Google Chronicle
• Combining Ironstream and Google Chronicle to deliver faster threat detection, investigation, and response times
Unlocking the Potential of the Cloud for IBM Power Systems (Precisely)
Are you considering leveraging the cloud alongside your existing IBM AIX and IBM i systems infrastructure? There are likely benefits to be realized in scalability, flexibility, and even cost.
However, to realize these benefits, you need to be aware of the challenges and opportunities that come with integrating your IBM Power Systems in the cloud. These challenges range from data synchronization to testing to planning for fallback in the event of problems.
Join us for this webcast to hear about:
• Seamless migration strategies
• Best practices for operating in the cloud
• Benefits of cloud-based HA/DR for IBM AIX and IBM i
Crucial Considerations for AI-ready Data.pdf (Precisely)
This document discusses the importance of ensuring data is ready for AI applications. It notes that while most businesses invest in AI, only 4% of organizations say their data is truly AI-ready. It identifies several issues that can arise from using bad data for AI, including bias, poor performance, and inaccurate predictions. The document advocates for establishing strong data governance, quality practices, and integration capabilities to address issues like completeness, validity, and bias. It provides examples of how two companies leveraged these approaches to enhance their AI and machine learning models. The document emphasizes that achieving trusted AI requires a focus on data integrity throughout the data journey from generation to activation.
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf (Precisely)
This document discusses how to empower businesses through worry-free data processing. Key steps include collecting and organizing relevant business data, developing efficient processes for analyzing and interpreting the data, and using insights from the data to help businesses make better decisions and improve their operations in a sustainable way over time.
It can be challenging to display and share capacity data that is meaningful to end users. There is an overabundance of data points related to capacity, and summarizing this data in a useful form is difficult.
You are already spending time and money on the critical need to manage systems capacity and performance and to estimate future needs. Are you spending it wisely? Are you getting the level of results from your investment that you really need? Can you prove it?
The good news is that the return on investment of implementing capacity management and capacity planning is most definitely positive and provable, both in terms of tangible monetary value and in some less tangible but no-less-valuable benefits.
Join us for this webinar and learn:
• Top Trends in Capacity Management
• Common customer pain points
• Ways to demonstrate these benefits to your company
Automate Studio Training: Materials Maintenance Tips for Efficiency and Ease ... (Precisely)
Ready to improve efficiency, provide easy to use data automations and take materials master (MM) data maintenance to the next level?
Find out how during our Automate Studio training on March 28 – led by Sigrid Kok, Principal Sales Engineer, and Isra Azam, Sales Engineer, at Precisely.
This session’s for you if you want to discover the best approaches for creating, extending or maintaining different types of materials, as well as automating the tricky parts of these processes that slow you down.
Greater control over your Automate Studio business processes means bigger, better results. We’ll show you how to enable your business users to interact with SAP from Microsoft Office and other familiar platforms – resulting in more efficient SAP data management, along with improved data integrity and accuracy.
This 90-minute session will be filled with a variety of topics, including:
• real-world approaches for creating multiple types of materials, balancing flexibility and power with simplicity and ease of use
• tips on material creation, including:
  • downloading the generated material number
  • using formulas to format prior to upload, such as capitalization or zero padding, to make it easy to get the data right the first time
  • conditionally requiring fields based on other field entries
  • using LOVs for fields that are free-form entry for standard values
• tips on modifying alternate units of measure, building from scratch using GUI scripting
• modifying multiple language descriptions, building from scratch using a standard BAPI
• making end-to-end MM process flows more of a reality with features including APIs and predictive AI
Through these topics, you’ll gain plenty of actionable takeaways that you can start implementing right away – including how to:
• improve your data integrity and accuracy
• make scripts flexible and usable for automation users
• seamlessly handle both simple and complex parts of material master
• interact with SAP from both business user and script developers’ perspectives
• easily upload and download data between SAP and Excel – and how to format the data before upload using simple formulas
You’ll leave this session feeling ready and empowered to save time, boost efficiency, and change the way you work.
Automate Studio reduces your dependency on technical resources to help you create automation scenarios – and our team of experts is here to make sure you get the most out of our solution throughout the journey.
Questions? Sigrid & Isra will be ready to answer them during a live Q&A at the end of the session.
Who should attend:
Attendees who will get the most out of this session are Automate Studio developers and runners familiar with SAP MM. Knowledge of Automate Studio script creation is nice to have, but not required.
Leveraging Mainframe Data in Near Real Time to Unleash Innovation With Cloud:... (Precisely)
Join us for an insightful roundtable discussion featuring experts from AWS, Confluent, and Precisely as they delve into the complexities and opportunities of migrating mainframe data to the cloud.
In this engaging webinar, participants will learn about the various considerations, strategies, and customer challenges associated with replicating mainframe data to cloud environments.
Our panelists will share practical insights, real-world experiences, and best practices to help organizations successfully navigate this transformative journey.
Whether you're considering migrating and modernizing your mainframe applications to cloud, or augmenting mainframe-based applications with data replication to cloud, this roundtable will provide valuable perspectives and insights to maximize the benefits of migrating mainframe data to the cloud.
Join us on March 27 to gain a deeper understanding of the opportunities and challenges in this evolving landscape.
Data Innovation Summit: Data Integrity Trends (Precisely)
Data integrity remains an evolving process of discovery, identification, and resolution. With public confidence in data used for decision-making at an all-time low, attention has gradually shifted to data quality and data integration across multiple systems and frameworks. Data integrity has again become a focal point for companies making strategic moves in an evolving economy.
Key takeaways:
· How to build a data-driven culture within your organization
· Tips to engage with key stakeholders in your business and examples from other businesses around the world
· How to establish and maintain a business-first approach to data governance
· A summary of the findings from a recent survey of global data executives by Drexel University's LeBow College of Business
AI You Can Trust - Ensuring Success with Data Integrity Webinar (Precisely)
Artificial Intelligence (AI) has become a strategic imperative in a rapidly evolving business landscape. However, the rush to embrace AI comes with risks, as illustrated by instances of AI-generated content with fake citations and potentially dangerous recommendations. The critical factor underpinning trustworthy AI is data integrity, ensuring data is accurate, consistent, and full of rich context.
Attend our upcoming webinar, "AI You Can Trust: Ensuring Success with Data Integrity," as we explore organizational challenges in maintaining data integrity for AI applications and real-world use cases showcasing the transformative impact of high-integrity data on AI success.
During this panel discussion, we'll highlight everything from personalized recommendations and AI-powered workflows to machine learning applications and innovative AI assistants.
Key Topics:
AI Use Cases with Data Integrity: Discover how data integrity shapes the success of AI applications through six compelling use cases.
Solving AI Challenges: Uncover practical solutions to common AI challenges such as bias, unreliable results, lack of contextual relevance, and inadequate data security.
Three Considerations of Data Integrity for AI: Learn the essential pillars—complete, trusted, and contextual—that underpin data integrity for AI success.
Precisely and AWS Partnership: Explore how the collaboration between Precisely and Amazon Web Services (AWS) addresses these challenges and empowers organizations to achieve AI-ready data.
Join our panelists to unlock the full potential of AI by starting your data integrity journey today. Trust in AI begins with trusted data – let's future-proof your AI together.
Less Bias. More Accurate. Relevant Outcomes.
Building RAG with self-deployed Milvus vector database and Snowpark Container... (Zilliz)
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
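As a rough sketch of the retrieval side of such a RAG pipeline (assuming a Milvus standalone container on localhost:19530 and the pymilvus client; the tiny 4-dimensional vectors stand in for real embeddings, and this is illustrative rather than the presenters' code):

```python
# Minimal sketch of the retrieval step of a RAG app against a self-hosted Milvus
# (e.g. the standalone docker container listening on localhost:19530).
# The embeddings here are made-up vectors; a real app would use an embedding model.
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")
client.create_collection(collection_name="docs", dimension=4)

docs = [
    {"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "text": "Milvus stores and indexes vectors."},
    {"id": 2, "vector": [0.9, 0.1, 0.0, 0.2], "text": "RAG retrieves context before generation."},
]
client.insert(collection_name="docs", data=docs)

# Embed the user question (placeholder vector) and fetch the closest passage,
# which would then be passed to the LLM as context.
hits = client.search(
    collection_name="docs",
    data=[[0.85, 0.15, 0.05, 0.2]],
    limit=1,
    output_fields=["text"],
)
print(hits[0][0]["entity"]["text"])
```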
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still treat monitoring & observability as the purview of ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
20 Comprehensive Checklist of Designing and Developing a Website (Pixlogix Infotech)
Dive into the world of website design and development with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI for test automation with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Threats to mobile devices are more prevalent and are increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
Van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of a talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW) 2022.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Continued Innovation in IBM z/System Sort Optimization with Syncsort MFX
1. Continued Innovation in IBM z/System Sort Optimization with Syncsort MFX
Denise Tabor | Senior Product Manager Director
Alissa Margulies | Principal Sales Engineer
2. Today’s Agenda
• Syncsort MFX overview
• Tips for Syncsort MFX optimization
• Important enhancements for Syncsort MFX customers
4. IBM zIIP Engines
IBM® z Integrated Information Processor (zIIP):
• Helps improve GP utilization
• Allows customers to purchase additional processing power
• Removes IBM software charges on zIIP capacity
Bottom line:
• Pay once, use often, without any additional cost
• Eligible workloads moved to zIIP reduce license costs
• HOWEVER, MOST workloads are not enabled to run on zIIP
5. Syncsort MFX
• Exploits z/OS hardware and software AND zIIP
• Reduces CPU utilization for sort operations
• Optimizes I/O activity
Delivers
• Improved processor and DASD capacity
• Reduced cost
6. Syncsort MFX
• Syncsort MFX: reduces CPU time and I/O activity
• Syncsort ZPSaver: zIIP-enabled sort operations
• Syncsort MFX PipeSort: reduces elapsed time
7. Syncsort MFX
Almost 50 years of continual development and enhancements
• Performance: the high-performance sort/copy/join solution that delivers better performance and saves money
• Proven Solution: improves sort performance while optimizing overall system efficiency
• zIIP Offload: sort workloads can be directed to the zIIP, thereby lowering CPU time and costs
• Encryption: enhanced security and compliance with regulations such as GDPR
10. Tuning Objectives for Syncsort MFX
• Tuning sort requires evaluating your current applications and defining your objectives
• Consider what you are willing to trade in order to get better performance
• What is the primary outcome/goal?
  • Reduced CPU?
  • Reduced elapsed time?
  • Avoiding contention with other workloads?
  • Reducing DASD contention?
• Are you willing to:
  • Trade CPU for elapsed time improvements and vice versa?
  • Reschedule your sort jobs?
  • Move data sets to isolate DASD to be used by the sort?
  • Pass run-time parameters to the sort?
  • Change priorities of applications?
11. Tuning Options and Recommendations
Control Statements
• Ensure you are using the optimal control statements for an application
• Evaluate errors in control statements that can adversely affect performance
Optimization Mode
• Select the proper mode for the desired outcome
• Mode selection may affect other areas of performance
• The performance outcome between modes may need some experimentation
Virtual Storage
• The most critical resource in determining how well the sort will run
• Also one of the most dangerous: using too much can lead to system storage shortages and system outages
12. Tuning Options and Recommendations
Rescheduling Work
• Evaluate your overall concurrent workload
• Are your CPU capacity, real storage, and I/O resources impacted?
• Determine if your sort work needs to be rescheduled to a quieter period
PARASORT
• PARASORT improves elapsed time performance for sorts whose input is a multi-volume tape data set and/or a concatenated tape data set
• Uses parallel processing of the SORTIN input volumes
• Results in up to a 33% reduction in elapsed time
FILESIZE Estimates
• The FILSZ parameter provides the sort with an estimate of the amount of data to be sorted
• Can significantly improve the optimization and performance of very large sorts (a minimal sketch follows)
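To make the FILSZ idea concrete, here is a minimal, hypothetical sort step. The data set names and field positions are placeholders, and the control statements use DFSORT-compatible syntax that Syncsort MFX generally accepts; treat it as a sketch rather than a tuned example.

//SORTBIG  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=PROD.SALES.DAILY,DISP=SHR
//SORTOUT  DD DSN=PROD.SALES.SORTED,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(500,100),RLSE)
//SYSIN    DD *
* FILSZ=En passes an estimated record count so the sort can size
* its work areas up front instead of guessing
  SORT FIELDS=(1,10,CH,A),FILSZ=E250000000
/*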
13. Syncsort MFX Data Manipulation
Reformatting Records
• INREC, OUTREC, and OUTFIL OUTREC control statements (see the sketch below)
• Used when information in the input record is not required by the applications, or the data needs to be in a different format
• Allow you to:
  • Delete or repeat segments of a record
  • Insert new fields
  • Convert data
  • Perform arithmetic operations with numeric fields and/or constants
  • Perform MIN/MAX functions on numeric data
  • Change the RECFM of the output data set from fixed to variable or the reverse
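As a hedged illustration of record reformatting, the field positions below are hypothetical and the syntax is the DFSORT-compatible form that Syncsort MFX also accepts:

* INREC drops bytes the application does not need before the sort,
* shrinking the records that flow through the sort work files
  INREC FIELDS=(1,30,41,10)
  SORT FIELDS=(11,8,CH,A)
* OUTREC reformats the sorted records, inserting a literal
* separator between the two retained sections
  OUTREC FIELDS=(1,30,C'|',31,10)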
Record Selection
• INPUT PHASE: selection of records in the order in which they appear in the input data set
• OUTPUT PHASE: selection of records seen in sorted sequence
• These selections can be specified to:
  • Skip the first “n” records
  • Stop after processing “n” records
  • Include or omit records based on comparisons of the contents of one or more fields within the record
  • Include a sample of “m” records after an interval of every “n” records
  • Distribute the records in rotation among all of the files in an OUTFIL group
  • Create a file that contains only those records that were not included in any other OUTFIL
(A minimal selection sketch follows.)
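The sketch below combines input-phase and output-phase selection; the field positions, comparison values, and OUTFIL DD names (EAST, WEST, OTHER) are assumptions made for illustration only:

* Input phase: skip the first 100 records, stop after 50,000, and
* drop records flagged with 'X' in the status byte
  SORT FIELDS=(1,10,CH,A),SKIPREC=100,STOPAFT=50000
  OMIT COND=(21,1,CH,EQ,C'X')
* Output phase: route records by region code; SAVE collects any
* record that matched no other OUTFIL group
  OUTFIL FNAMES=EAST,INCLUDE=(30,2,CH,EQ,C'EA')
  OUTFIL FNAMES=WEST,INCLUDE=(30,2,CH,EQ,C'WE')
  OUTFIL FNAMES=OTHER,SAVE
* Rotation across several output files can be requested instead
* with OUTFIL FNAMES=(OUT1,OUT2,OUT3),SPLIT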
14. Syncsort MFX Data Manipulation
Summation
• Special processing performed on records with equal sort keys (see the sketch below)
• Detail records are replaced with a summary record containing sum, average, maximum, or minimum values
• Detail records can be written to a separate data set
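A minimal summation sketch, again with hypothetical field positions, in DFSORT-compatible syntax:

  SORT FIELDS=(1,8,CH,A)
* Records with equal keys collapse into one summary record whose
* zoned-decimal amount field holds the total for the group
  SUM FIELDS=(21,8,ZD)

Writing the dropped detail records to a separate data set is typically requested through an additional SUM option; check your installed Syncsort MFX documentation for the exact keyword and DD name before relying on it.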
Report Writer
• Generate ad-hoc or scheduled reports
• Easy-to-use functionality with powerful record selection and formatting
Join Records
• Records are created by joining two files that contain a common join key
• Join processing produces three types of records:
  • Paired records
  • Unpaired records from the first file
  • Unpaired records from the second file
(A minimal JOINKEYS sketch follows.)
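The following JOINKEYS sketch shows the three record categories in practice; the DD names and field layouts are hypothetical, and the syntax is the DFSORT-compatible form that Syncsort MFX also supports:

//SORTJNF1 DD DSN=PROD.CUSTOMER.MASTER,DISP=SHR
//SORTJNF2 DD DSN=PROD.ORDERS.DAILY,DISP=SHR
//SYSIN    DD *
  JOINKEYS FILE=F1,FIELDS=(1,10,A)
  JOINKEYS FILE=F2,FIELDS=(5,10,A)
* Keep paired records plus unpaired records from the first file
  JOIN UNPAIRED,F1
* Build the joined record from 50 bytes of F1 and 20 bytes of F2
  REFORMAT FIELDS=(F1:1,50,F2:15,20)
  SORT FIELDS=(1,10,CH,A)
/*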
15. Additional Syncsort MFX product options
• Syncsort PipeSort: can simultaneously execute up to eight differently sequenced sorts from a single pass of the input data
• Syncsort PROCSort: high-performance, transparent replacement for the SAS®-provided PROC SORT
• Syncsort ZPSaver: a set of enhanced technologies to offload copy, SMS compression, and sort processing to zIIP processors
17. IBM Z Sort Accelerator Support (delivered)
• IBM’s Integrated Accelerator for Z Sort
  • New coprocessor designed for the z15
  • Accelerates internal sorts
• Precisely partnership with IBM
  • Worked with HW architects
  • Developed new algorithms in Syncsort MFX
• Results
  • Sort performance improvements
18. IBM Z Pervasive Encryption Support (delivered)
• IBM Z Pervasive Encryption enables
  • Powerful encryption of data in-flight and at-rest
  • Highly secure ways to help deal with today’s compliance and regulatory requirements
• But it requires additional resources
  • Consumes processor cycles
  • Forces Syncsort MFX to use less performant I/O methods (BSAM)
• Solution
  • For basic and large format data sets, we can continue to use our low-level I/O
19. MFX Operational Visibility
• Customers need more insight into their sort workloads
  • Unexpected delays
  • Diagnostic information is frequently needed
  • Rerunning a job just to gather “debug” information
• Provide more visibility
  • Expedite, streamline, and improve troubleshooting for large/important jobs
  • Examine both real-time and historical data
  • Port the data to an analytical tool for analysis
IBM® z Integrated Information Processor (zIIP):
A purpose-built processor designed to operate asynchronously with the general processors in the mainframe to help improve utilization of computing capacity and control costs.
zIIPs allow customers to purchase additional processing power without affecting the total million service units (MSU) rating
IBM does not impose IBM software charges on zIIP capacity, but charges apply when additional general purpose engine capacity is used
Bottom line:
Once zIIPs are purchased they are used without any additional cost
Moving eligible workloads to zIIP reduces general purpose engine utilization and can save SW license costs
The big challenge is that most workloads are not enabled to run on zIIP
Offers a high-performance sort, copy, and join utility designed to exploit the advanced facilities of the z/OS operating system and IBM Z, including zIIP engines.
Can significantly reduce CPU utilization for sort operations and optimize I/O activity to reduce contention to control software costs and delay CPU upgrades
Frees up general purpose MIPS for handling increased data volumes and new workloads
Drive cost reduction strategies by delaying CPU upgrades and reducing software charges.
Sort operations are resource intensive
Syncsort MFX uses less CPU time than DFSORT
Syncsort MFX optimizes I/O better than DFSORT
Syncsort MFX encrypts sort work for required security and compliance
Faster execution means more work can execute in the same amount of time
===
Significantly reduces general processor operation
Makes use of under-utilized zIIP engines
Helps control MLC costs and delay/prevent CPU upgrades
Allows sort work encryption to be completed on zIIP, reducing CPU utilization dramatically and reducing costs
===
Simultaneously executes up to eight differently sequenced sorts from a single pass of the input data
Uses advanced parallel sorting technology
Cuts total elapsed time by more than 50% compared to running separate sorts
In certain circumstances, sort may be unable to get a good filesize estimate, which may be an issue when the amount of data to be sorted is over 1 gigabyte.
PARASORT requires additional tape drives and will automatically manage the tape drives and optimize their usage.
Notes on Record Selection: Once again, it is important to understand the flow of the sort and where the particular feature fits into this flow. The resulting output from the sort can be dramatically different if the record selection is performed in the output phase instead of the input phase.
Notes on Join Records: The disposition of records in each category can be controlled independently. The output from join processing can contain any combination of these record types. This feature is similar to the join processing found in relational databases.
JOIN RECORDS: If there are “m” records from the first file and “n” records in the second file that contain the same value in the join key, m*n records will be created.
SUMMATION: This processing produces a data set that has only one record per sort key value unless an overflow occurs. If summation or averaging is requested and the resulting calculation would cause an overflow condition, the summation or averaging is not done.
USE CASES for REPORTS: One example is a report job that produces three separate invoice status reports. These reports, titled “Unpaid Invoices,” “Partially Paid Invoices,” and “Fully Paid Invoices,” are three output files generated with a single pass of the sort. The reports can be printed or written to disk or tape. (A minimal sketch follows.)
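A hedged sketch of that single-pass, three-report job follows; the OUTFIL DD names, field positions, and status codes are invented for illustration, and the syntax is DFSORT-compatible:

  SORT FIELDS=(1,10,CH,A)
  OUTFIL FNAMES=UNPAID,INCLUDE=(25,1,CH,EQ,C'U'),
    HEADER1=('UNPAID INVOICES')
  OUTFIL FNAMES=PARTPAID,INCLUDE=(25,1,CH,EQ,C'P'),
    HEADER1=('PARTIALLY PAID INVOICES')
  OUTFIL FNAMES=FULLPAID,INCLUDE=(25,1,CH,EQ,C'F'),
    HEADER1=('FULLY PAID INVOICES')

Each OUTFIL group selects one invoice status and writes its own report file, so all three reports come from a single read and sort of the input data.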
Syncsort ZPSaver is a set of enhanced technologies for MFX to offload copy, SMS compression, and sort processing to zIIP processors, effectively reducing the workload on the main CPU.
ZPSaver can reduce TCB CPU time by up to 95% in eligible Syncsort MFX applications
IBM’s Integrated Accelerator for Z Sort
New coprocessor designed for the z15 that reduces CPU usage and improves elapsed time
Speeds up sorting, shortens batch windows, and improves select database functions.
Precisely worked closely with IBM
Worked with HW architects in the z/OS Poughkeepsie team to develop support for the sort accelerator.
Developed new algorithms in Syncsort MFX to take advantage of the coprocessor
Customers can expect to see dramatic improvements to batch sort job performance.
In our lab, we are seeing CPU and elapsed time improvements of up to 35% depending on the key length, record length, file size and some other factors.
IBM Z Pervasive Encryption enables
Powerful encryption of data in-flight and at-rest
Highly secure ways to help deal with today’s compliance and regulatory requirements
Customers love the pervasive encryption approach, but no one likes the additional resource consumption.
The challenge with implementing data encryption is that it consumes processor cycles, so it does not come without penalty.
For Syncsort MFX users, when the input or output data set is encrypted, BSAM must be used instead of our high-performance low-level I/O access methods, and there is an extra cost from that perspective.
Once again working closely with IBM, we have identified a way to continue to use our low-level I/O for encrypted data sets and improve encryption performance.
We have seen great performance improvements in our benchmark testing.
With Syncsort MFX alone, we have seen up to 45% CPU and 40% elapsed time savings.
Combining Syncsort MFX with the Syncsort ZPSaver capability, we have seen up to 80% CPU and 40% elapsed time savings.
Are you faced with the challenge of providing the right diagnostic information when reporting a failure?
Would you like a way to expedite the troubleshooting process?
Are you constrained by where you can store job information for further analysis?
What types of repositories or tools do you currently use for collecting job information for analysis?
Would you like more visibility into your sorting workloads?
What kind of analytical tools are you utilizing in other areas of the business to do analysis?
Splunk? Elastic?
What type of information would you like to see when evaluating your workloads?
Would you benefit from a solution that:
Identifies the key pieces of information to submit to support for review?
Limits or specifies the jobs or groups of jobs to analyze?
Provides examples of ways to view the information, or lets you define your own reports/dashboards?
What other types of details would you like us to know?