The document summarizes updates made to IBM's COBOL Performance Tuning Paper, including:
1) New compiler options like BLOCK0 and XMLPARSE that can significantly impact performance, as well as runtime options like INTERRUPT.
2) Performance improvements to COBOL code generation that have occurred over many releases, allowing code to better exploit modern hardware.
3) Specific examples where performance optimizations like dynamic CALL under CICS provided orders of magnitude faster execution versus other approaches.
This document contains notes from a presentation on COBOL 5.1 given in April 2014 by Michael Erichsen. The presentation covered new features in COBOL 5.1 including support for XML, Unicode, IMS SQL, improved debugging tools using DWARF 4, performance optimizations, and considerations for migrating to COBOL 5.1 such as code and JCL changes and increased compilation storage requirements.
iSeries Modernization: RPG/400 to Java Migration (ecubemarketing)
eCube provides modernization, integration, replatforming and web-facing solutions that will extend the ROI of your RPG application. Learn more about eCube's transformation process for legacy RPG applications. http://ecubesystems.com/iseries.html
IMS 14 includes many new features to improve agility, application deployment and management, integration with DB2, business growth capabilities, infrastructure enhancements, and database and transaction manager enhancements. Key highlights include enhancements to support dynamic database changes, catalog management of resources, OSAM and DEDB improvements, SQL aggregation functions, DBRC and FDBR enhancements, reduced TCO, and cascaded transaction support across LPARs.
This document provides information and recommendations for using the IMS Catalog. Key points include:
- The Catalog acts as a metadata repository but is not a full data dictionary. It lacks definitions for business elements.
- Catalog structures are based on time stamps rather than relationships between changes.
- Multiple datasets may be needed to partition the Catalog data by DBD and PSB.
- Operational procedures include the DFSU3ACB utility and CATPOP job to update the Catalog. Image copies are recommended.
This document summarizes a presentation on Parallel Sysplex performance topics including Coupling Facility CPU instrumentation and structure-level performance analysis. It discusses enhancements in RMF reporting for CF CPU usage and XCF monitoring. Experiments were conducted to analyze CF CPU consumption and service times for different structures under increasing load. The document also covers matching CF CPU data between SMF records, structure duplexing performance impacts, and additional instrumentation available in newer releases.
Educational seminar: lessons learned from customer DB2 for z/OS health check... (John Campbell)
This presentation presented at the Polish DB2 User Group introduces and discusses the most common issues uncovered by the DB2 for z/OS Development SWAT Team from 360 Degree DB2 for z/OS Continuous Availability Assessment (DB2 360) Studies.
This document discusses the importance of zIIP capacity planning for IBM mainframes. It notes that zIIP utilization and eligible workloads are increasing. The document provides guidance on using instrumentation to measure zIIP usage at the address space level and considering LPAR configuration. Recent DB2 versions have made more CPU work eligible for offloading to zIIPs, changing the rules for zIIP capacity planning.
Motivations and Considerations for Migrating from SMTPD/Sendmail to CSSMTP (zOSCommserver)
Change is Coming: Motivation and Considerations for Migrating from SMTPD/Sendmail to CSSMTP
In July of 2015, IBM issued a statement of direction indicating that z/OS V2R2 Communications Server would be the last release to include the SMTPD Mail Gateway and Sendmail mail transports. In this session, we will discuss the reasons for this removal, review CSSMTP-related enhancements in V2R2, and look at considerations when migrating to the CSSMTP mail gateway from SMTPD.
This document discusses challenges with traditional Java development approaches and how the z2 Environment addresses them. It notes that traditional approaches do not scale well for modular systems or large projects. Maven is useful for libraries but does not ensure source or system consistency. The z2 Environment provides a runtime that automatically updates based on source repository changes, ensuring a consistent environment across development, testing, and production.
The document provides an overview of IBM's Automatic Binary Optimizer (ABO) for z/OS. ABO improves performance of compiled COBOL programs without requiring source code changes or recompilation. It optimizes directly from compiled binaries to generate code targeting the latest IBM Z systems. Early customer results show performance gains ranging from 5-21%. The tool supports debugging and works with IBM and third party performance analysis tools. It performs whole-program optimization while preserving compatibility. Customers can use ABO for static selection to load optimized modules or dynamic selection using product infrastructure for optimization configuration and tracking.
IBM's Problem Determination Tools have evolved since their introduction in 2000 to become more robust and functionally superior through ongoing releases. Customers are migrating to the tools due to issues with older products, demands for more sophisticated development and testing tools, and rising maintenance fees for other solutions. The Problem Determination Tools suite features capabilities for supporting SOA/composite applications, optimizing performance, debugging applications, managing and testing data, and conducting various types of testing.
The document discusses the basics of the Eclipse debug platform, including an overview of key components like the standard debug model, launch framework, breakpoints, and variables view. It describes how these provide common abstractions and interfaces that debugger implementations can use to integrate with the platform and user interface. It also provides an example of implementing a debugger for a pushdown automaton language to demonstrate how this works.
The New IBM Enterprise COBOL V5/V6 for z/OS and the IBM ABO (Paulo Batuta)
The document discusses IBM's latest offerings for improving the performance of COBOL applications on z Systems, including Enterprise COBOL for z/OS V6.1 and Automatic Binary Optimizer for z/OS V1.1. These solutions optimize existing compiled COBOL code to better exploit the z architecture, yielding CPU time reductions of up to 60% in some cases.
The document provides background information on COBOL (Common Business Oriented Language). It describes how COBOL was developed in 1959 to solve business problems in an English-like manner. It also discusses the structure and organization of COBOL programs, which include four main divisions - identification, environment, data, and procedure. The divisions provide information about the program, equipment, files, data, and instructions respectively.
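As an illustration of the four-division structure described above, a minimal program could look like the following sketch (a hypothetical example, not taken from the summarized document):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO.
      * The Environment Division describes the computing environment
      * (it may be empty in a simple program).
       ENVIRONMENT DIVISION.
      * The Data Division declares the program's data items.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-GREETING PIC X(12) VALUE 'HELLO, WORLD'.
      * The Procedure Division contains the executable logic.
       PROCEDURE DIVISION.
           DISPLAY WS-GREETING.
           STOP RUN.
```

Compiled with a fixed-format COBOL compiler such as Enterprise COBOL or GnuCOBOL, this program displays the greeting and stops.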
Control/DCD and Control/SE are software tools that can help with migrating existing COBOL applications to newer Enterprise COBOL compilers. Control/DCD runs in batch mode on one or all programs, while Control/SE allows interactive processing of individual programs. These tools provide pre-compilation analysis of code to help identify issues during migration, as recompiling old COBOL programs without documentation risks introducing logic errors or reopening old bugs. They can help large organizations that rely heavily on legacy mainframe COBOL applications but now use offshore contractors for development and maintenance.
IBM's latest COBOL offerings, Enterprise COBOL for z/OS V6.1 and Automatic Binary Optimizer for z/OS V1.1, provide improved performance, scalability, and new features to modernize business critical COBOL applications. Enterprise COBOL V6.1 enhances scalability to compile and optimize very large COBOL programs and delivers release-to-release performance improvements of up to 5% through full support for the latest z13 and z13S hardware. The Automatic Binary Optimizer provides optimization of existing COBOL binaries without requiring source changes. Migrating from older compilers requires more work than previous migrations due to changes in system setup and syntax.
The document describes the z Environment, which provides a solution for managing the lifecycle of Java solutions. The z Environment allows solutions to be self-updating from source code, cost-effective to operate, and easy to distribute. It aims to make continuous integration and deployment of applications transparent, auditable, and versioned. Key aspects of the z Environment include synchronizing changes from source code repositories to runtime environments, managing system states, and supporting development, testing, and production.
Cobol is a robust, English-like programming language used widely in enterprise applications. It turned 50 years old in 2009 and remains heavily used due to the large amount of existing Cobol code and the challenges of migrating data to new systems. Cobol uses a structured programming style with four divisions - identification, environment, data, and procedure. The data and procedure divisions declare variables and contain the program logic. Cobol supports common control structures like conditional statements and loops. Records allow grouping of related data fields.
The document discusses various CPU benchmarks used to evaluate performance, including their pros and cons. It notes that synthetic benchmarks like Dhrystone and Whetstone have limitations and are outdated. Better benchmarks measure real applications or standardized workloads like CoreMark, which aims to replace Dhrystone by testing common algorithms like linked lists and matrices. The document also cautions that benchmarks can be manipulated and advocates for transparency in benchmarking methodology and results.
Continued Innovation in IBM z/System Sort Optimization with Syncsort MFX (Precisely)
Mainframe departments in larger enterprises are challenged to keep up with rising data volumes from ever-changing applications running on the mainframe and even in hybrid cloud environments. There is continued pressure to squeeze more performance out of their existing mainframes to reduce costs and defer major upgrades. Obvious measures to get relief from these pressures revolve around the mundane processing tasks (sort, copy, merge, compression, report generation) that take up a large and disproportionate share of CPU processing time and related mainframe resources.
That’s where Precisely’s Syncsort MFX has always delivered key benefits like optimized sorting, reduced CPU usage, and faster job completion. Precisely has a 50-year history of innovation in this area, and we continue to innovate our Syncsort MFX solution. We have several important enhancements coming for our customers. In addition to highlighting this new functionality, we want to share some powerful tips and tricks we have seen our customers use to improve their IBM z/System sorting.
Watch this on-demand webinar and learn:
• How Syncsort MFX can improve your sort performance and efficiency
• What enhancements are coming for Syncsort MFX customers
• Some tips and tricks for getting the most out of Syncsort MFX
Juha Kosonen, Nokia, Mika Rautakumpu, Nokia
The Open Compute Project (OCP) is a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on compute infrastructure. The designs have been optimized to lower cost of infrastructure and operations e.g. by removing non-essential components, disaggregating rack level solution with common resources, and simplifying server serviceability.
OpenStack provides the foundation for the NFVI and MANO components within OPNFV. The OPNFV releases Colorado and the recent Danube have been successfully integrated with OCP hardware and are running smoothly. Hardware acceleration is also supported. The concept has gained a lot of interest from mobile operators, some of whom are also running OPNFV on top of OCP hardware in their test laboratories.
This presentation will introduce how the OpenStack, OCP, and OPNFV open source projects fit together.
1) Callgraph analysis of ATLAS software identified clusters of heavily called functions that could benefit from inlining to reduce instruction counts. Inlining requires changes to code and use of link-time optimization with profile guidance.
2) Avoiding position independent code may improve performance but reduce code sharing. Static libraries could allow link-time optimization.
3) Tools like IgProf, SystemTap and perf events can profile memory and performance, but a visualizer is needed to analyze object-oriented software. Sampling branch records may improve basic block counts.
The document discusses IBM's COBOL and Automatic Binary Optimizer (ABO) offerings. It provides an overview of COBOL compilers through time and how migrating from older compilers requires more work. It highlights new features in COBOL V6.2 like JSON support and condition compilation. Performance improvements from recompiling with COBOL V6.2 and from ABO V1.3 are noted, with examples showing significant speedups from new z14 vector instructions.
Hopla! Software is a Docker value-added reseller and distributor that provides 24x7 support in Spanish. They have a presence in several countries and 12 engineers. Their services include break/fix support, subscriptions, training, tools, and migration and architectural consulting. They help companies adopt Docker to improve development cycles, scalability, responsibilities segregation between teams, portability, and operational efficiency. Docker allows applications to be packaged in containers that provide consistent environments, enabling faster development and deployment, simplified testing and rollbacks, and higher server utilization.
This document provides an introduction to the COBOL programming language. It discusses the history of COBOL, including its development in 1959 and subsequent standardization efforts. It also describes some of the key features and structure of COBOL programs, including the four main divisions of Identification, Environment, Data, and Procedure. The Data Division describes data items, and the Procedure Division contains the program logic and algorithms. COBOL is well-suited for business applications and allows for readable names and detailed data descriptions.
Learn how to improve the performance of your Cognos environment. We cover hardware and server specifics, architecture setup, dispatcher tuning, report specific tuning including the Interactive Performance Assistant and more. See the recording and download this deck: https://senturus.com/resources/cognos-analytics-performance-tuning/
Senturus offers a full spectrum of services for business analytics. Our Knowledge Center has hundreds of free live and recorded webinars, blog posts, demos and unbiased product reviews available on our website at: https://senturus.com/resources/
The document describes Oracle's new SPARC T4 servers, which provide up to 5x better single-threaded performance than previous SPARC servers. The SPARC T4 servers are optimized for Oracle software like the Oracle Database and WebLogic Suite. They include integrated security features like encryption without performance penalties. The document provides an overview of the SPARC T4 processor architecture and performance advantages, and describes how the new servers are optimized solutions for running Oracle applications.
Webinar: High Performance MongoDB Applications with IBM POWER8 (MongoDB)
Innovative companies are building Internet of Things, mobile, content management, single view, and big data apps on top of MongoDB. In this session, we'll explore how the IBM POWER8 platform brings new levels of performance and ease of configuration to these solutions which already benefit from easier and faster design and development using MongoDB.
The state of Hive and Spark in the Cloud, July 2017 (Nicolas Poggi)
Originally presented at the BDOOP and Spark Barcelona meetup groups: http://meetu.ps/3bwCTM
Cloud providers currently offer convenient on-demand managed big data clusters (PaaS) with a pay-as-you-go model. In PaaS, analytical engines such as Spark and Hive come ready to use, with a general-purpose configuration and upgrade management. Over the last year, the Spark framework and APIs have been evolving very rapidly, with major performance improvements and the release of v2, making it challenging to keep production services, both on-premises and in the cloud, up to date, compatible, and stable. The talk compares:
• The performance of both v1 and v2 for Spark and Hive
• PaaS cloud services: Azure HDinsight, Amazon Web Services EMR, Google Cloud Dataproc
• Out-of-the-box support for Spark and Hive versions from providers
• PaaS reliability, scalability, and price-performance of the solutions
Using BigBench, the new Big Data benchmark standard. BigBench combines SQL queries, MapReduce, user code (UDF), and machine learning, which makes it ideal to stress Spark libraries (SparkSQL, DataFrames, MLlib, etc.).
The document does not contain enough content to summarize. It only contains the word "Adv" which provides no meaningful context or information to extract a multi-sentence summary from.
Service-Level Objective for Serverless Applicationsalekn
Deploying commercial applications that meet their expected business needs is challenging due to the differences between how business goals are specified and how the system is evaluated. Furthermore, business goals are dynamic, requiring deployment to change constantly over time. Such difficulties make it costly to maintain application quality as the underlying infrastructure is not always fast enough to keep up with business changes. Nowadays, serverless opens a new approach to build application. By abstracting out the deployment details, serverless application can be implemented with minimum deployment efforts. Serverless also reduces maintenance cost with auto-scaling and pay-as-you-go. Such abilities make us believe that by adopting serverless, we can build application that can meet and quickly adapt to business goals.
However, simply writing applications with serverless is not sufficient. Due to best-effort invocation mechanisms and the lack of application structure awareness, serverless performance is highly variable and often fails to support applications with rigorous quality of service requirements. In this study, we aim to mitigate such limitations by coupling serverless deployment with business needs. In particular, we define an Serverless Service-Level Objective (SLO) interface that allows developers to describe their application structure and business goals in terms of software-level objectives. We implement an SLO enforcer, which uses this information in combination with the system performance metrics to decide a proper serverless deployment and resource allocation for meeting business goals. The Serverless SLO leverages blueprint model, which allow developers to describe applications' architecture and runtime characteristics needs, to map application description to serverless function deployment on the top of Knative. We deploy our proposed system on KinD, a tool to run Kubernetes cluster over our local Docker container, and evaluate it with different system configurations. Evaluation results showed that SLO definition and enforcement helps serverless application use resources in accordance with business goals.
OpenPOWER Acceleration of HPCC SystemsHPCC Systems
JT Kellington, IBM and Allan Cantle, Nallatech present at the 2015 HPCC Systems Engineering Summit Community Day about porting HPCC Systems to the POWER8-based ppc64el architecture.
CICS TS 52 provides enhancements in three key areas:
1) Service Agility - Added support for RESTful services, JSON processing, and Liberty profile updates.
2) Operational Efficiency - New resource monitoring policies and support for TLS 1.2 and FIPS standards.
3) Cloud Enablement - Simplified deployment patterns and region management capabilities.
We cover the IBM solution for HPC. In addition to hardware and software stack we show how the rational choice of compilation/running parameters helps to significantly improve the performance of technical computing applications.
This document discusses various techniques for optimizing computer code, including:
1. Local optimizations that improve performance within basic blocks, such as constant folding, propagation, and elimination of redundant computations.
2. Global optimizations that analyze control flow across basic blocks, such as common subexpression elimination.
3. Loop optimizations that improve performance of loops by removing invariant data and induction variables.
4. Machine-dependent optimizations like peephole optimizations that replace instructions with more efficient alternatives.
The goal of optimizations is to improve speed and efficiency while preserving program meaning and correctness. Optimizations can occur at multiple stages of development and compilation.
DB2 11 for z/OS Migration Planning and Early Customer ExperiencesJohn Campbell
This extensive presentation provides help and guidance to help DB2 for z/OS customer migrate as quickly as possible, but safely to V11. The material will provide additional planning information, share customer customer experiences and best practices.
The document discusses the CICS Loader Domain and Local Shared Resources (LSR) pools. It provides an overview of the CICS loader from pre-ESA to TS 5.1, describing control blocks like the CPE, APE, and changes made in TS 5.1 to use 64-bit addressing. It also discusses issues with LSR pools like buffers in 31-bit storage and odd control interval sizes wasting space. Potential improvements are suggested like moving buffers to 64-bit storage and allowing CICS to dynamically select LSR pools and configuration.
Similar to Cobol performance tuning paper lessons learned - s8833 tr (20)
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
Executive Directors Chat Leveraging AI for Diversity, Equity, and InclusionTechSoup
Let’s explore the intersection of technology and equity in the final session of our DEI series. Discover how AI tools, like ChatGPT, can be used to support and enhance your nonprofit's DEI initiatives. Participants will gain insights into practical AI applications and get tips for leveraging technology to advance their DEI goals.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
Cobol performance tuning paper lessons learned - s8833 tr
1. COBOL Performance Tuning Paper -
lessons learned
Speaker: Tom Ross
IBM
March 2
Session Number 8833
2. 2
Agenda
• Performance of COBOL compilers - myths and realities
• Performance improvements over the years
• Highlights of updates to Performance Tuning Paper
• Coding tips
3. 3
Myths and Realities
• Performance of COBOL compilers - myths and realities
• IBM marketing materials imply performance improvements
• Improved generated code is available in PL/I and C/C++
• Wishful thinking adds to the misconception
• IBM COBOL compilers are extremely efficient!
• Dev process includes regular performance scrutiny
• COBOL does run faster on newer processors
4. 4
IBM Compilers exploit System z for Maximum Performance
• Compilers exploit new hardware instructions introduced by System z (z9, z10)
• Code generated by the compilers is highly tuned for System z
• Boost in performance of applications running on System z
z/OS XL C/C++
• standards-compliant C/C++ compilers to support porting code
• METAL C compiler option to support low-level programming
Enterprise COBOL for z/OS
• support for modernization of applications (XML support and Java support)
• integration with middleware such as CICS, DB2, and IMS
Enterprise PL/I for z/OS
• facilitates repurposing of existing business processes into new business models
• integration with IBM middleware (CICS, DB2, and IMS)
• 135 new / changed instructions (z196)
5. 5
Myths/facts about z196 and COBOL
• The 135 new and changed instructions were added or
changed for many reasons, not just for performance
• Examples:
• Cryptographic facility instructions
• BFP and DFP floating point instructions
• The z196 processor processes all instructions faster than
the z10 does, even old COBOL!
• IBM COBOL development is working on a new compiler
design to make it easier to exploit new hardware
instructions as they are introduced
6. 6
Summary of new z196 instructions
• The IBM zEnterprise 196 provides a broad range of new facilities to
improve performance and function:
• High-word facility (30 instructions)
• Interlocked-access facility (12 instructions)
• Load/store-on-condition facility (6 instructions)
• Distinct-operands facility (22 instructions)
• Population-count facility (1 instruction)
• Enhanced-floating-point facility (25 new, 30 changed instructions)
• MSA-X4 facility (4 new, 3 changed instructions, new functions)
• Etc.
• Potential for:
• Significant performance improvement
• Enhanced capabilities
• Simpler code
7. 7
Performance improvements over the
years for COBOL compilers
• VS COBOL II
• Many performance improvements over 6 releases result in
very fast code produced by IBM COBOL compilers
• COBOL V2R2
• Significant performance improvement in processing binary
data with the TRUNC(BIN) compiler option
• COBOL V4R1
• Performance of COBOL application programs has been
enhanced by exploitation of new z/Architecture® instructions.
The performance of COBOL Unicode support (USAGE
NATIONAL data) has been significantly improved.
8. 8
Performance tuning paper updated
• As the result of a SHARE requirement, we were able to
apply resources to getting the COBOL Performance
Tuning Paper updated for COBOL V4R2
• The last time it was updated was for COBOL V3R1, 2001
• Online at:
http://www-01.ibm.com/software/awdtools/cobol/zos/library/
• New info since V3R1 version:
• BLOCK0, XMLPARSE, INTERRUPT
• Updated section
• CICS communication
9. 9
Performance tuning paper updated
• BLOCK0 compiler option
• New V4R2 option to change default behavior for QSAM
output files
• For 40 years, no BLOCK CONTAINS clause meant:
• BLOCK CONTAINS 1 RECORD
• The slowest possible!
• Counterpoint: the file is always current
• BLOCK0 changes the compiler default for QSAM files from
unblocked to blocked (as if BLOCK CONTAINS 0 were
specified), so that output files gain the benefit of
system-determined blocking.
10. 10
Performance tuning paper updated
• Specifying BLOCK0 activates an implicit BLOCK
CONTAINS 0 clause for each file in the program that
meets the following three criteria:
• The FILE-CONTROL paragraph either specifies
ORGANIZATION SEQUENTIAL or omits the
ORGANIZATION clause.
• The FD entry does not specify RECORDING MODE U.
• The FD entry does not specify a BLOCK CONTAINS clause.
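For illustration, here is a hypothetical file definition (names invented, not from the paper) that meets all three criteria and is therefore treated under BLOCK0 as if BLOCK CONTAINS 0 had been coded:

```cobol
       SELECT OUT-FILE ASSIGN TO OUTDD
           ORGANIZATION IS SEQUENTIAL.
       FD  OUT-FILE
           RECORDING MODE F
           RECORD CONTAINS 80 CHARACTERS.
      * No BLOCK CONTAINS clause coded: under NOBLOCK0 this
      * file is written unblocked; under BLOCK0 it gets
      * system-determined blocking (as BLOCK CONTAINS 0).
```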
11. 11
Performance tuning paper updated
• BLOCK0 compiler option…results?
• Performance considerations using BLOCK0 on a program
with a file that meets the criteria:
• One program using BLOCK0 was 88% faster than using
NOBLOCK0 and used 98% fewer EXCPs.
12. 12
Performance tuning paper updated
• XMLPARSE compiler option
• There are 3 parsers in COBOL today
• COBOL V3 parser, available in V4 as XMLPARSE(COMPAT)
• Selected by compiler option
• XMLSS non-validating parser (COBOL V4R1)
• Selected by compiler option
• XMLSS validating parser (COBOL V4R2)
• Selected by compiler option + VALIDATING WITH clause
• Do not change to XMLSS from V3 (COMPAT) parser unless
you need the extra functionality!
• Customer feedback and testing show it is a lot slower
13. 13
Performance tuning paper updated
• XMLPARSE compiler option…Results?
• Performance considerations for XML PARSE example:
• Five programs using XML PARSE were from 20% to 108%
slower when using XMLPARSE(XMLSS) compared to using
XMLPARSE(COMPAT).
14. 14
Performance tuning paper updated
• INTERRUPT run-time option
• The V3R1 version of the performance tuning paper did
not cover this option
• The INTERRUPT option causes attention interrupts to be
recognized by Language Environment. When you cause
an interrupt, Language Environment can give control to
your application or to Debug Tool.
• Performance considerations using INTERRUPT:
• On average, INTERRUPT(ON) was 1% slower than
INTERRUPT(OFF), with results ranging from equivalent to 18% slower
15. 15
Performance tuning paper updated
• SIMVRD run-time option…removed support!
• The SIMVRD option specifies whether COBOL programs
use a VSAM KSDS to simulate a variable-length relative
organization data set. This support is only available with VS
COBOL II through Enterprise COBOL Version 3 programs.
Starting with Enterprise COBOL Version 4, this
support is no longer available.
• Performance considerations using SIMVRD:
• One VSAM test case compiled with Enterprise COBOL 3.4
was 5% slower when using SIMVRD compared to
NOSIMVRD.
• Those concerned with performance will not miss SIMVRD!
16. 16
Performance tuning paper updated
• Program communication under CICS
• Choices: static CALL, dynamic CALL or EXEC CICS LINK
• In many cases EXEC CICS LINK can be replaced with
COBOL dynamic CALL (similar separate load module
characteristic)
• DYNAM compiler option is not allowed for programs with
EXEC CICS statements in CICS, so you must use CALL
identifier to do dynamic CALL in these cases
• In some cases dynamic CALL cannot replace CICS LINK:
• Cross systems EXEC CICS LINK
• If subprograms ABEND or STOP RUN, they will stop the
caller unless EXEC CICS LINK is used
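A minimal sketch of the two calling styles under CICS (program and data names hypothetical). Since DYNAM is not allowed with EXEC CICS statements, the dynamic CALL is done through an identifier (a data item holding the program name) while compiling with NODYNAM:

```cobol
       WORKING-STORAGE SECTION.
       77  WS-PGM-NAME  PIC X(8) VALUE 'SUBPGM1'.
       01  WS-COMMAREA  PIC X(100).
       PROCEDURE DIVISION.
      * Style 1: EXEC CICS LINK (slower in the tests below)
           EXEC CICS LINK PROGRAM('SUBPGM1')
                COMMAREA(WS-COMMAREA)
           END-EXEC.
      * Style 2: COBOL dynamic CALL through an identifier;
      * SUBPGM1 is still loaded as a separate load module
           CALL WS-PGM-NAME USING WS-COMMAREA.
```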
17. 17
Performance tuning paper updated
• Program communication under CICS
• Performance considerations using CICS (measuring call
overhead only):
• One test case was 446% slower using EXEC CICS LINK
compared to using COBOL dynamic CALL with
CBLPSHPOP(ON)
• The same test case was 7817% slower using EXEC CICS
LINK compared to using COBOL dynamic CALL with
CBLPSHPOP(OFF)
• The same test case was 1350% slower using COBOL
dynamic CALL with CBLPSHPOP(ON) compared to using
COBOL dynamic CALL with CBLPSHPOP(OFF)
18. 18
Performance tuning paper updated
• To show the magnitude of the difference in CPU times
between the above methods, here are the CPU times
obtained from running each of these tests on our
system; they may not be representative of the results on
your system.
'call' type                          CPU Time (seconds)
EXEC CICS LINK                       0.475
COBOL dynamic CALL CBLPSHPOP(ON)     0.087
COBOL dynamic CALL CBLPSHPOP(OFF)    0.006
19. 19
Performance tuning paper updated
• COBOL normally either ignores decimal overflow
conditions or handles them by checking the condition code
after the decimal instruction. ILC triggers a switch to a
language-neutral (ILC) program mask
• This ILC program mask enables decimal overflow
• (COBOL-only program mask ignores overflow)
• COBOL code also tests condition after decimal instructions
• Overflows cause program calls to condition handling
• Overflows can be very common in COBOL
• Result: COBOL math can get bogged down
20. 20
Performance tuning paper updated
• Performance considerations for a mixed COBOL with C or
PL/I application with COBOL using PACKED-DECIMAL
data types in 100,000 arithmetic statements that cause a
decimal overflow condition (100,000 overflows):
• Without C or PL/I: .040 seconds of CPU time
• With C or PL/I: 1.636 seconds of CPU time
21. 21
Performance tuning paper updated
• XML GENERATE and XML PARSE result in bringing a C
signature into your module - ILC!
• Solutions?
• If XML processing is a special case, move XML processing
into a dynamically called subroutine
• Process XML in separate processes if possible
22. 22
Performance tuning paper updated
• SEARCH - binary versus serial
• We got the question: Is there a point (a small enough
number of items searched) where a serial search is faster
than a binary SEARCH?
• Answer: it depends on your data!
• Performance considerations for search example:
• Using a binary search (SEARCH ALL) to search a
100-element table was 15% faster than using a sequential
search (SEARCH)
• Using a binary search (SEARCH ALL) to search a
1000-element table was 500% faster than using a sequential
search (SEARCH)
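As an illustration (table and data names hypothetical), a binary SEARCH ALL requires an ASCENDING KEY clause and a table kept in key order:

```cobol
       01  PART-TABLE.
           05  PART-ENTRY OCCURS 1000 TIMES
               ASCENDING KEY IS PART-NO
               INDEXED BY PART-IDX.
               10  PART-NO    PIC 9(6).
               10  PART-DESC  PIC X(30).
      * Binary search: about 10 probes for 1000 entries,
      * versus up to 1000 probes for a serial SEARCH
           SEARCH ALL PART-ENTRY
               AT END
                   MOVE 'N' TO WS-FOUND-FLAG
               WHEN PART-NO (PART-IDX) = WS-WANTED-NO
                   MOVE 'Y' TO WS-FOUND-FLAG
           END-SEARCH
```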
23. 23
Performance tuning paper updated
• UPPER and LOWER case conversion
• When converting data to upper or lower case, it is
generally more efficient to use INSPECT CONVERTING
than the intrinsic functions FUNCTION UPPER-CASE or
FUNCTION LOWER-CASE.
• Performance considerations for character conversions:
• One test case that does 1,000 uppercase conversions was
35% faster when using INSPECT CONVERTING compared
to using FUNCTION UPPER-CASE or FUNCTION LOWER-CASE
• For this same test case, these intrinsic functions used 70%
more storage than INSPECT CONVERTING
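A sketch of both approaches (WS-TEXT is a hypothetical alphanumeric item); INSPECT CONVERTING translates the item in place, while the intrinsic function goes through an intermediate result:

```cobol
      * Generally faster: in-place table translation
           INSPECT WS-TEXT CONVERTING
               'abcdefghijklmnopqrstuvwxyz' TO
               'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
      * Generally slower, and used 70% more storage
      * in the test case above:
           MOVE FUNCTION UPPER-CASE(WS-TEXT) TO WS-TEXT
```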
24. 24
Performance tuning paper updated
• Initializing Data
• The INITIALIZE statement sets selected categories of data
fields to predetermined values.
• However, it is inefficient to initialize an entire group unless
you really need all the items in the group to be initialized to
different values.
• If you have a group that contains OCCURS data items and
you want to set all items in the group to the same
character (for example, space or x'00'), it is generally more
efficient to use a MOVE statement instead of the
INITIALIZE statement.
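For example (group and item names hypothetical), when every byte of the group can be set to the same character:

```cobol
       01  WS-GROUP.
           05  WS-ITEM OCCURS 100 TIMES.
               10  WS-NAME    PIC X(20).
               10  WS-AMOUNT  PIC 9(4).
      * Faster: one group move fills all 2400 bytes
           MOVE SPACES TO WS-GROUP
      * Slower: initializes each elementary item in
      * every OCCURS element separately
           INITIALIZE WS-GROUP
```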
25. 25
Performance tuning paper updated
• Initializing Data
• Performance considerations for INITIALIZE on a program
that has 5 OCCURS clauses in the group:
• When each OCCURS clause in the group contained 100
elements, a MOVE to the group was 8% faster than an
INITIALIZE of the group.
• When each OCCURS clause in the group contained 1000
elements, a MOVE to the group was 23% faster than an
INITIALIZE of the group.
26. 26
Coding tips from customer situations
• Avoid INITIALIZE unless the functionality is really needed
• Much faster to MOVE SPACES or x'00' to the group
• If individual fields need to be set to spaces or different types
of zero (external decimal, packed-decimal, numeric-edited)
then by all means use INITIALIZE
• Rule: Don't use INITIALIZE just because it is there!
27. 27
Coding tips from customer situations
• One customer got a recommendation from a consultant to
code in Java instead of COBOL
• The customer would have preferred to code in COBOL
• He complained of continued issues with slow performance
and missed Service Level Agreements (SLAs) due to poor
Java performance
28. 28
Coding tips from customer situations
• One customer found that COBOL performance was better
than PL/I and wanted to start using only COBOL for new
applications (they are 50/50 COBOL and PL/I)
• The customer wanted to have replacements for commonly
used PL/I functions:
• VERIFY
• TRIM
• INDEX
• When they tried to code these in COBOL they found they
were too slow
• They asked me to try to do better…
29. 29
Coding tips from customer situations
* VERIFY PL/I function written in COBOL: slow
MOVE '02.04.2010' TO TEXT1
MOVE TEXT1 TO TEXT2
INSPECT TEXT2 REPLACING ALL '.' BY '0'
IF TEXT2 IS NOT NUMERIC
MOVE 'NOT DATE' TO TEXT1
END-IF
30. 30
Coding tips from customer situations
* VERIFY PL/I function written in COBOL: 40% faster
SPECIAL-NAMES.
CLASS VDATE IS '0' thru '9' '.'.
. . .
MOVE '02.04.2010' TO TEXT1
IF TEXT1 IS Not VDATE Then
MOVE 'NOT DATE' TO TEXT1
END-IF
31. 31
Coding tips from customer situations
* TRIM PL/I function written in COBOL: slow
MOVE ' This is string 1 ' TO TEXT1
COMPUTE POS1 POS2 = 0
INSPECT TEXT1
TALLYING POS1
FOR LEADING SPACES
INSPECT FUNCTION REVERSE(TEXT1)
TALLYING POS2
FOR LEADING SPACES
MOVE TEXT1(POS1 + 1 : LENGTH OF TEXT1 - POS2 - POS1)
TO TEXT2
32. 32
Coding tips from customer situations
* TRIM PL/I function written in COBOL: 31% faster
MOVE ' This is string 1 ' TO TEXT1
PERFORM VARYING POS1 FROM 1 BY 1
UNTIL TEXT1(POS1:1) NOT = SPACE
END-PERFORM
PERFORM VARYING POS2 FROM LENGTH OF TEXT1
BY -1 UNTIL TEXT1(POS2:1) NOT = SPACE
END-PERFORM
COMPUTE LEN = POS2 - POS1 + 1
MOVE TEXT1(POS1 : LEN) TO TEXT2 (1 : LEN)
33. 33
Coding tips from customer situations
* INDEX PL/I function written in COBOL: slow
MOVE 'TestString1 TestString2' TO BUFFER
COMPUTE POS = 0
INSPECT BUFFER
TALLYING POS
FOR CHARACTERS
BEFORE INITIAL 'TestString2'
34. 34
Coding tips from customer situations
* INDEX PL/I function written in COBOL: 83% faster
MOVE 'TestString1 TestString2' TO BUFFER
PERFORM VARYING POS FROM 1 BY 1
UNTIL BUFFER(POS:11) = 'TestString2'
END-PERFORM
35. 35
Questions about formatted dumps
• One program with a large data division (about 1 million
items) using TEST(NOHOOK) took 330 times more CPU
time to produce a CEEDUMP with COBOL's formatted
variables compared to using NOTEST to produce a
CEEDUMP without COBOL's formatted variables.
• Do you use formatted dumps?
• i.e., compile with TEST(NOHOOK) or TEST(NONE) for
production programs
• Do you care about DUMP performance?
• Usually not done in online environments