In this presentation, attendees will learn how to determine the best approach for tuning SQL statements and resolving other issues by identifying the specific causes of slow performance. Real-life case studies will be used to demonstrate:
1. How to use Response Time Analysis (RTA) to quickly identify the biggest problems in a database
2. How to utilize the MDA tables to understand what resources are being used or waited on (a short query sketch follows this list)
3. Tips and techniques for examining and further fine-tuning the execution plan
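As a rough illustration of the MDA-table approach in point 2, here is a minimal sketch that ranks server-wide wait events. The DSN name and credentials are placeholders for a real ODBC data source and a login with mon_role; monSysWaits and monWaitEventInfo are the documented ASE MDA tables for wait statistics.

```python
# Minimal sketch: rank server-wide wait events from the ASE MDA tables.
# "ASE_DSN", the user and the password are placeholders for a real ODBC
# data source; monSysWaits / monWaitEventInfo are the documented MDA tables.
import pyodbc

QUERY = """
select top 10 w.WaitEventID, i.Description, w.Waits, w.WaitTime
from master..monSysWaits w
join master..monWaitEventInfo i on i.WaitEventID = w.WaitEventID
order by w.WaitTime desc
"""

conn = pyodbc.connect("DSN=ASE_DSN;UID=perf_user;PWD=secret")
for event_id, description, waits, wait_time in conn.execute(QUERY):
    print(f"{event_id:5d}  {description:<40}  waits={waits}  wait_time={wait_time}")
conn.close()
```

The same pattern extends to per-session analysis by joining monProcessWaits against the SPID you are investigating.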
The document provides tips and guidelines for optimizing performance of SAP Sybase ASE, covering planning, configuration, running/monitoring, and troubleshooting. It discusses topics like hardware sizing, operating system configuration, database configuration parameters, query optimization goals, monitoring tools like MDA, and troubleshooting techniques using set options and MDA tables. The goal is to provide novices and experienced DBAs with information to enhance ASE performance.
Real-time Analytics with Presto and Apache Pinot (Xiang Fu)
Presto Con 2021
Most analytics products today focus either on ad-hoc analytics, which requires query flexibility without guaranteed latency, or on low-latency analytics with limited query capability.
In this talk, we will explore how to get the best of both worlds using Apache Pinot and Presto:
1. How analytics is done today when trading off latency and flexibility: a comparison of analytics on raw data versus pre-joined/pre-cubed datasets.
2. An introduction to Apache Pinot as a column store for fast real-time data analytics, and to the Presto Pinot Connector to cover the entire landscape.
3. A deep dive into the Presto Pinot Connector to see how the connector does predicate and aggregation pushdown (see the sketch after this list).
4. Benchmark results for the Presto Pinot Connector.
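As a small, hedged illustration of the pushdown discussed in point 3, the sketch below runs an aggregation through Presto's Python DB-API client against a Pinot catalog. The catalog, schema, table, and column names (pinot, default, events, country, event_time) are hypothetical; with the Pinot connector configured, a query of this shape is a candidate for predicate and aggregation pushdown into Pinot itself.

```python
# Hedged sketch: run an aggregation through Presto against a Pinot catalog.
# Catalog/schema/table/column names are hypothetical; adjust to your deployment.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto-coordinator", port=8080, user="analyst",
    catalog="pinot", schema="default",
)
cur = conn.cursor()
# With the Pinot connector, the filter and GROUP-BY aggregation below are
# candidates for predicate and aggregation pushdown into Pinot.
cur.execute("""
    SELECT country, count(*) AS cnt
    FROM events
    WHERE event_time > now() - INTERVAL '1' HOUR
    GROUP BY country
    ORDER BY cnt DESC
    LIMIT 10
""")
for country, cnt in cur.fetchall():
    print(country, cnt)
```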
ASE Performance and Tuning Parameters Beyond the cfg File (SAP Technology)
The ASE configuration file contains a long list of changeable parameters, but many parameters still exist outside of the main configuration file. This session discusses hidden gems in other places that can improve performance for queries, networking, and system administration.
Step by Step installation of SAP BusinessObjects BI Platform 4.0 on Windows 2... (Jorge Batista)
The document outlines the step-by-step process for installing SAP BusinessObjects BI Platform 4.0 on Windows 2008R2 and Oracle. It describes selecting prerequisites, license agreements, user information, installation type, database configuration, configuring Tomcat application server, CMS account, ports, and finishing the installation process. The key steps are to choose an existing Oracle database for the CMS and auditing repositories, install and configure Tomcat, specify server and port settings, and complete the post-installation configuration.
ebpf and IO Visor: The What, how, and what next! (Affan Syed)
Extended BPF (eBPF) provides a mechanism for running custom programs inside the Linux kernel that can be used for filtering network packets, monitoring system activity, and more. eBPF programs are written in a restricted subset of C and compiled to bytecode that is verified by the kernel for safety before being run. The BCC toolkit makes it easier to write and load eBPF programs. The IO Visor project aims to further develop eBPF and provide tools and use cases for networking, security, and system tracing applications.
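For a flavor of what a BCC-based eBPF program looks like, here is a classic BCC "hello world" style sketch: a kprobe on the clone syscall that prints a trace line on every call. It requires root privileges and the bcc Python bindings on a reasonably recent kernel.

```python
# Classic BCC-style "hello world": attach a kprobe to the clone syscall and
# print a trace line each time a process calls clone(). Requires root and
# the bcc Python bindings.
from bcc import BPF

program = r"""
int hello(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
print("Tracing clone()... Ctrl-C to quit")
b.trace_print()   # streams lines from the kernel trace pipe
```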
Real-time Analytics with Trino and Apache Pinot (Xiang Fu)
Trino summit 2021:
Overview of Trino Pinot Connector, which bridges the flexibility of Trino's full SQL support to the power of Apache Pinot's realtime analytics, giving you the best of both worlds.
SAP Fiori is a collection of apps that provide a simple user experience for commonly used SAP functions across devices. It includes 25 apps for managers, employees, purchasing agents and sales representatives. SAP developed SAP Fiori based on feedback from over 250 customers to make the most frequent business tasks easy to use like consumer apps. Initial customer feedback on SAP Fiori has been very positive.
Technical Overview of CDS View – SAP HANA Part I (Ashish Saxena)
SAP HANA has introduced new paradigms to SAP ABAP application programming. Before SAP HANA, the development paradigm in SAP was Data-to-Code, where intensive calculation was done at the application layer and database utilization was minimized. The new programming paradigm with SAP HANA is Code-to-Data, where the intensive calculation is done at the database layer and less programming is required at the application layer.
Data ingestion and distribution with Apache NiFi (Lev Brailovskiy)
In this session, we will cover our experience working with Apache NiFi, an easy-to-use, powerful, and reliable system to process and distribute large volumes of data. The first part of the session will be an introduction to Apache NiFi. We will go over NiFi's main components, building blocks, and functionality.
In the second part of the session, we will show our use case for Apache NiFi and how it's being used inside our Data Processing infrastructure.
The document compares the query execution plans produced by Apache Hive and PostgreSQL. It shows that Hive's old-style execution plans are overly verbose and difficult to understand, providing many low-level details across multiple stages. In contrast, PostgreSQL's plans are more concise and readable, showing the logical query plan in a top-down manner with actual table names and fewer lines of text. The document advocates for Hive to adopt a simpler execution plan format similar to PostgreSQL's.
This presentation describes how to efficiently load data into Hive. I cover partitioning, predicate pushdown, ORC file optimization, and different loading schemes.
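As a hedged sketch of that loading pattern (not taken from the presentation itself), the snippet below creates an ORC-backed, date-partitioned Hive table and loads it with dynamic partitioning through Spark's Hive support. The table, column, and path names (web_logs, event_date, /landing/...) are illustrative only.

```python
# Hedged sketch of the loading pattern: an ORC-backed table partitioned by
# date, loaded with dynamic partitioning via Spark's Hive support.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hive-load").enableHiveSupport().getOrCreate()
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

spark.sql("""
    CREATE TABLE IF NOT EXISTS web_logs (
        user_id STRING,
        url     STRING,
        bytes   BIGINT
    )
    PARTITIONED BY (event_date STRING)
    STORED AS ORC
""")

staging = spark.read.json("/landing/web_logs/")      # raw, unpartitioned input
staging.createOrReplaceTempView("web_logs_staging")
spark.sql("""
    INSERT INTO TABLE web_logs PARTITION (event_date)
    SELECT user_id, url, bytes, event_date
    FROM web_logs_staging
""")
```

Partitioning by event_date keeps scans narrow, and ORC storage enables the predicate pushdown the presentation discusses.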
This document discusses BADIs, enhancements, and differences between user exits and BADIs in SAP. It also provides details about enhancement spots, which are used to manage explicit enhancement options and can carry information about positions where enhancements were created. The document concludes with explanations of calling transactions via BDC tables and using session methods in BDC for processing large amounts of data asynchronously.
Core Data Services (CDS) is a next generation data modeling tool introduced by SAP to define data models on the database server rather than the application server for SAP HANA applications. CDS offers conceptual modeling, built-in functions, and extensions beyond traditional data modeling. Originally only available in SAP HANA, CDS can now also be implemented in SAP NetWeaver AS ABAP. CDS includes data definition, query, manipulation, and control languages and allows a code-to-data paradigm where data models are defined as views in the ABAP dictionary and consumed via Open SQL.
In this session, Neil Avery covers the planning and operation of your KSQL deployment, including under-the-hood architectural details. You will learn about the various deployment models, how to track and monitor your KSQL applications, how to scale in and out and how to think about capacity planning. This is part 3 out of 3 in the Empowering Streams through KSQL series.
This document discusses cost-based query optimization in Apache Hive. It provides background on the author and describes how cost-based optimization is being incrementally introduced in Hive using the Apache Optiq query planning framework. The initial focus is on join reordering using join cardinality costing. Examples are provided showing performance improvements on TPC-DS queries when the cost-based optimizer is used to reorder joins compared to the rule-based optimizer in Hive. Future work is needed to support more query constructs and scale to larger queries.
Accelerate Your Apache Spark with Intel Optane DC Persistent Memory (Databricks)
Data volumes are growing rapidly in the big data space, and more and more memory is consumed either in computation or to hold intermediate data for analytic jobs. For memory-intensive workloads, end users have to scale out the compute cluster or extend memory with storage such as HDD or SSD to meet the requirements of their computing tasks. Scaling out the cluster adds cost for cluster management, operation, and maintenance, which increases the total cost if the extra CPU resources are not fully utilized. To address this shortcoming, Intel Optane DC persistent memory (Optane DCPM) breaks the traditional memory/storage hierarchy and scales up the computing server with higher-capacity persistent memory, while delivering higher bandwidth and lower latency than storage such as SSD or HDD. Apache Spark is widely used for analytics such as SQL and machine learning in cloud environments, where the low performance of remote data access is a typical bottleneck, especially for I/O-intensive queries; ML workloads are iterative, so I/O bandwidth is key to end-to-end performance. In this talk, we will introduce how to accelerate Spark SQL with OAP (https://github.com/Intel-bigdata/OAP) to achieve an 8X performance gain on the cloud, and how to improve K-means performance by 2.5X using RDD cache, both leveraging Intel Optane DCPM. We will also take a deep dive into how Optane DCPM enables these performance gains.
Speakers: Cheng Xu, Piotr Balcer
As ABAP developers, we often have to develop ABAP reports that display some data from the database. SAP provides a set of ALV (ABAP List Viewer) function modules which can be put to use to embellish the output of a report. Object-oriented ALV is more robust and more advanced compared to traditional ALV.
Technical Overview of CDS View - SAP HANA Part II (Ashish Saxena)
It is very important that a developer understands that, technically, CDS is an enhancement of SQL which provides a Data Definition Language (DDL) for defining semantically rich database tables/views (CDS entities) and user-defined types in the database. Unlike SAP HANA CDS, ABAP CDS is independent of the database system. The entities of the models defined in ABAP CDS provide enhanced access functions compared with existing database tables and views defined in the ABAP Dictionary, making it possible to optimize Open SQL-based applications. It is because of these advantages that ABAP CDS is the preferred methodology for the Code-to-Data paradigm.
This document provides an overview of Hive and its performance capabilities. It discusses Hive's SQL interface for querying large datasets stored in Hadoop, its architecture which compiles SQL queries into MapReduce jobs, and its support for SQL semantics and datatypes. The document also covers techniques for optimizing Hive performance, including data abstractions like partitions, buckets and skews. It describes different join strategies in Hive like shuffle joins, broadcast joins and sort-merge bucket joins and how they are implemented in MapReduce. The overall presentation aims to explain how Hive provides scalable SQL processing for big data.
This document discusses different types of ABAP reports including listing reports, drill-down reports, control break reports, and ALV reports. It provides examples of how to create control break reports using the AT FIRST, AT NEW, AT END OF, and AT LAST statements. It also summarizes how to display internal tables using the ALV grid and list viewers, including examples of using field catalogs to customize the display.
This document discusses analyzing OpenLDAP logs with the ELK stack. It provides an overview of Elasticsearch, Logstash, and Kibana. It describes the format of OpenLDAP logs, including the types of information logged at different log levels. It then outlines how to configure Logstash to parse OpenLDAP logs using Grok patterns and output to Elasticsearch. Finally, it discusses how the data can be visualized and queried in Kibana.
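As a rough Python stand-in for the Grok step (the actual Logstash configuration would use a grok filter), the snippet below extracts the connection id, operation id, and operation type from a typical OpenLDAP "stats" log line. The sample line and pattern are illustrative; real deployments vary.

```python
# Rough Python stand-in for the Grok step: pull the connection id, operation
# id and operation type out of a typical OpenLDAP "stats" log line.
import re

LINE = ('Jun  5 10:15:42 ldap1 slapd[1234]: conn=1001 op=2 '
        'SRCH base="dc=example,dc=com" scope=2 filter="(uid=jdoe)"')
PATTERN = re.compile(r'conn=(?P<conn>\d+) op=(?P<op>\d+) (?P<operation>[A-Z]+)')

match = PATTERN.search(LINE)
if match:
    print(match.group("conn"), match.group("op"), match.group("operation"))
    # -> 1001 2 SRCH
```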
ERP systems can be examined and worked with from two perspectives, technical and functional; a good functional specialist has no way around becoming familiar with the technical aspects of ERP. This is not a hypothesis; others have reached it through experience, and so have I.
This presentation is the definitive list of literally every possible technique for making tempdb faster. It's been run at multiple events around the world and it keeps getting bigger and better.
ASE 15.5 introduced semantic partitions as a premium feature. Follow two real-world case studies and learn what the gotchas are, and implications for your data model and application SQL.
This session will take a look at when and how query optimization takes place, the resources used for query optimization, the role of index statistics, and common application query problems (other than simplistic missing indexes) that lead to DBAs assuming there is a query optimization issue.
This document discusses the migration of the Eclipse project from CVS to Git. It describes some of the lessons learned during the process, including setting policies to prevent non-fast-forward merges and reflog retention to prevent history rewrites. Best practices for Git usage are also outlined, such as automatic rebase setup and integration branches. Legal notices for trademarks are also provided.
Configuring and using SIDB for ASE CE SP130 (SAP Technology)
This session will discuss and demonstrate how to configure ASE CE for HA operations with SP130. An ASE Cluster Edition overview will be provided and basic ASE CE features will also be discussed.
A small group of DBAs with over 100 combined years of DBA experience discuss the characteristics of being a DBA, as well as the SAP-ASE specific tasks required.
Leveraging SAP ASE Workload Analyzer to optimize your database environment (SAP Technology)
New functionality in SAP ASE allows you to capture, analyze and replay a production workload in a non-disruptive manner. Learn how to capture a workload on your production system, analyze the characteristics of the workload, and then replay it on a testing environment. See how quickly you can analyze the impact of changes in configuration parameters on application performance. Use real-life scenarios to determine the optimal configuration for your database.
Here are the key steps for sub-capacity licensing:
1. Install the SySAM sub-capacity license utility (sysamcap) on the physical machine.
2. Run sysamcap to discover the virtual environment and generate an XML file describing the virtual environment topology.
3. Upload the XML file to the SAP Service Marketplace to generate sub-capacity license files.
4. Install the sub-capacity license files on the physical machine.
5. Install and configure the SySAM license server.
6. Install SAP Replication Server and other products. The products will check out licenses from the license server based on the number and type of virtual CPUs used.
New from SAP Sybase Replication Server: We now offer support for
• Real-time transactional replication into SAP HANA for non-SAP applications
• Real-time change data capture (CDC) for SAP Data Services
• Disaster recovery for SAP Business Suite applications running on SAP Sybase Adaptive Server Enterprise (SAP Sybase ASE)
• Plus, great new features for current customers
Learn about these new options and other features for existing SAP Sybase Replication Server customers during our Webcast. We’ll show you how the new version of SAP Sybase Replication Server replicates transactional data for non-SAP applications in real time directly into SAP HANA – without slowing or disrupting the systems that are running the business – to create a true real-time analytics solution.
And for customers running SAP Business Suite on SAP Sybase ASE, we now have a solution that reduces both planned and unplanned downtime using SAP Sybase Replication Server.
Take advantage of this opportunity to learn how your organization can build a better real-time analytics solution and better protect your SAP Business Suite data.
Storage Optimization and Operational Simplicity in SAP Adaptive Server Enter... (SAP Technology)
This presentation will discuss the key storage optimization and operational simplicity features available in SAP ASE and introduce enhancements such as heat map, which provides the capability to move data to high- or low-performing storage devices based on access patterns.
Top 10 production support manager interview questions and answers (tonychoper0506)
This document provides materials and advice for interview questions and answers for a production support manager position. It includes sample answers for common interview questions, such as why the applicant wants the job, what challenges they are looking for, and what they have learned from past mistakes. The document emphasizes showing passion for the role, linking experiences to the job requirements, and researching the company beforehand.
This document provides an overview of the Hadoop MapReduce Fundamentals course. It discusses what Hadoop is, why it is used, common business problems it can address, and companies that use Hadoop. It also outlines the core parts of Hadoop distributions and the Hadoop ecosystem. Additionally, it covers common MapReduce concepts like HDFS and the MapReduce programming model. The document includes several code examples and screenshots related to Hadoop and MapReduce.
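To make the MapReduce programming model concrete, here is a toy word count written as explicit map, shuffle, and reduce phases in plain Python; a real job would of course run over HDFS data via Hadoop Streaming or a MapReduce framework rather than in a single process.

```python
# Toy word count expressed as explicit map, shuffle and reduce phases.
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    for line in lines:                      # map: emit (word, 1) pairs
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    shuffled = sorted(pairs, key=itemgetter(0))             # shuffle/sort by key
    for word, group in groupby(shuffled, key=itemgetter(0)):
        yield (word, sum(count for _, count in group))      # reduce: sum counts

lines = ["the quick brown fox", "the lazy dog", "the fox"]
print(dict(reduce_phase(map_phase(lines))))
# -> {'brown': 1, 'dog': 1, 'fox': 2, 'lazy': 1, 'quick': 1, 'the': 3}
```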
An In-Depth Look at SAP SQL Anywhere Performance Features (SAP Technology)
This presentation examines the internal mechanism behind SQL Anywhere’s self-management and self-tuning functionality. Topics covered will illustrate how the various performance features, such as server cache management, self-healing statistics and dynamic multiprogramming level, work in concert to provide a robust data management solution in zero-administration environments.
SAP performance testing & engineering courseware v01 (Argos)
This document provides an overview of planning best practices for SAP performance testing. It discusses defining testing objectives, deriving workload models by identifying critical transactions, managing test data requirements, setting up test environments, and identifying stakeholders and expectations. The planning process involves working with various project teams to gather requirements, analyze performance needs, classify critical business processes, and develop a testing methodology and process aligned with the project's SDLC. Key activities include setting expectations, scoping the test, defining quantitative performance metrics, and developing a business transaction profile mapping processes to automated scripts.
Maximizing Database Tuning in SAP SQL Anywhere (SAP Technology)
This session illustrates the different tools available in SQL Anywhere to analyze performance issues, as well as describes the most common types of performance problems encountered by database developers and administrators. We also take a look at various tips and techniques that will help boost the performance of your SQL Anywhere database.
Consolidation, private cloud, public cloud, SQL as a Service, and so on are all virtualization scenarios possible with SQL Server. This session will lay out the rules of good practice for this type of deployment and the key scenarios. We will also revisit some of the "lessons learned from Azure".
Why and How to Monitor App Performance in Azure (Ian Downard)
This presentation provides a brief overview of APM solutions for the Azure cloud computing platform. We discuss three challenges unique to cloud computing which APM can address, and we summarize which APM techniques can be applied in IaaS, PaaS, and SaaS application architectures. To illustrate APM techniques for IaaS and PaaS we look at a variety of APM offerings in the Azure marketplace, including Riverbed AppInternals, Microsoft Application Insights, and New Relic. To illustrate APM techniques for SaaS, we look at how SharePoint Online can be instrumented using JavaScript injection. This presentation was prepared and delivered by Ian Downard to the Portland Azure User Group on March 28th, 2016, in Portland, Oregon.
This lecture will include examples of how to use some of SQL Anywhere's lesser known but incredibly powerful functionality to deliver applications that can do more for your customers. Features discussed will include the SQL Anywhere http server, using external environments to call Java/C++/.Net libraries, materialized views, and more.
Predicting When Your Applications Will Go Off the Rails! Managing DB2 Appli... (CA Technologies)
This document discusses using analytics to manage and predict performance issues with DB2 applications. It begins with discussing the challenges DBAs face in maintaining optimal DB2 performance as applications increase in complexity. The solution presented is CA Performance Analytics for DB2, which uses existing performance data from CA Detector to create baselines and monitor for deviations that could indicate upcoming problems. It analyzes trends to provide early warnings. Use cases demonstrate how it can monitor key applications and service levels, provide event details to troubleshoot issues, and help re-establish baselines when application changes occur. The goal is proactive monitoring to avoid problems before they impact users.
Building ISV Applications that run in the cloud with SQL Anywhere On-Demand E... (SAP Technology)
This lecture discusses how ISVs in the traditional shrink-wrapped software market can move to the cloud and provide SaaS solutions to their customers, with SQL Anywhere on-demand edition providing their data management platform. We discuss some of the reasons why an ISV and their customers would want to move to a SaaS model, some of the barriers that make this difficult for established ISVs, and demonstrate how ISVs can build, deploy, and manage cloud applications with SQL Anywhere on-demand edition.
Db2 update day 2015 managing db2 with ibm db2 tools svenn aage (Peter Schouboe)
This document discusses JN Data's consolidation of DB2 administration tools. It summarizes:
1) JN Data consolidated its DBA tools from 3 vendors to 1 to reduce costs, while maintaining service levels and supporting new DB2 releases.
2) The consolidation process took 12 months and involved transitioning existing homegrown solutions that were built on the prior tools to use the new vendor's product instead.
3) JN Data analyzed vendors' abilities to support current and future DB2 features, integrate with existing solutions, and perform required tasks to choose a replacement product.
Tech Talk: Five Simple Steps to a More Powerful Database Experience (CA Technologies)
This interactive discussion is for customers who have recently upgraded, are currently upgrading or considering upgrading to the most current production release. As technology evolves, the way people work continues to change. To make an impact, you need to think creatively and innovate boldly. Join us to hear about five simple steps towards a more powerful CA IDMS™ and CA Datacom® mainframe database experience—that help to improve performance, scalability, platform support, standards compliance and usability.
For more information, please visit http://cainc.to/Nv2VOe
This presentation highlights the enhancements planned for SAP ASE to support multi-tenancy and other requirements for cloud deployments. It will also highlight capabilities to support database-as-a-service, along with how SAP ASE can be used with popular solutions provided by infrastructure-as-a-service providers.
Adaptive Server Farms for the Data Center (elliando dias)
The document discusses adaptive server farms for data centers. It addresses challenges like inefficient utilization, overprovisioning, and high costs. It proposes pooling server resources, automating management, and dynamically allocating resources based on demand. This improves utilization and reduces costs through automation, load balancing, and continuous service availability.
"Architecture assessment from classics to details", Dmytro OvcharenkoFwdays
We will talk about architecture assessment and the SEI ATAM methodology in detail. We also review the Quality Attribute Workshop at a high level and look at the differences between quantitative and qualitative analysis. The assessment process can be represented as a set of activities roughly split into assessment preparation, collection of the important data and stakeholders' inputs, architecture analysis, and, finally, presentation of findings and recommendations. Finally, we will review the assessment document and some examples.
Best Practice for Supercharging CA Workload Automation dSeries (DE) for Optim... (CA Technologies)
This session will discuss utilizing CA Workload Automation dSeries (DE) 11.3 and the new r12 features and enhancements to ensure that you can maximize the throughput of your applications. Best practice scheduling techniques will be discussed and shown.
For more information, please visit http://cainc.to/Nv2VOe
OOW15 - Getting Optimal Performance from Oracle E-Business Suite (vasuballa)
This packed Oracle development session summarizes practical tips and lessons learned from performance tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Application system administrators will get concrete tips and techniques for identifying and resolving performance bottlenecks on all layers of the technology stack. They will also learn how Oracle’s engineered systems, such as Oracle Exadata and Oracle Exalogic, can dramatically improve the performance of their system.
OOW16 - Getting Optimal Performance from Oracle E-Business Suite [CON6711] (vasuballa)
This Oracle Development session summarizes practical tips and lessons learned from performance tuning and benchmarking the world’s largest Oracle E-Business Suite environments. Application system administrators will get concrete tips and techniques for identifying and resolving performance bottlenecks on all layers of the technology stack. They will also learn how Oracle’s engineered systems such as Oracle Exadata and Oracle Exalogic can dramatically improve the performance of their system
It's Not a Dream—Conquer Chaos for Your DB2® for z/OS® Optimization Nightmares (CA Technologies)
Database optimization tasks must be continually performed to help improve performance, scalability, platform support, standards compliance and usability while simultaneously reducing costs and risk. Having difficulty building, testing, and deploying new DB2® database functions and improving performance? Learn more about a number of new enhancements for your DB2 database management tools from CA Technologies.
For more information, please visit http://cainc.to/Nv2VOe
The document discusses SAP Integration Suite and its role in connecting enterprises and enabling innovation. Some key points:
- SAP Integration Suite is the foundation for agility and innovation by connecting systems, processes, and experiences across enterprises and ecosystems.
- It provides pre-built integration content and tools to accelerate integration projects, empower business users with self-service integration, and customize omnichannel experiences.
- Case studies showcase how companies have used SAP Integration Suite to boost digital sales, streamline processes, enhance the customer experience, and gain real-time insights.
Future-Proof Your Business Processes by Automating SAP S/4HANA processes with... (SAP Technology)
Automating repetitive manual business processes enables your employees to focus on key business priorities, reduce manual errors, and improve customer satisfaction. Learn how SAP Intelligent Robotic Process Automation helps you automate SAP S/4HANA business processes with pre-built bots and comprehensive development toolset.
7 Top Reasons to Automate Processes with SAP Intelligent Robotic Processes Au... (SAP Technology)
This document discusses how SAP Intelligent RPA can be used to automate SAP S/4HANA processes. It highlights top reasons to automate processes with SAP Intelligent RPA, including best-in-class integration with SAP applications, fully attended and unattended RPA capabilities, and comprehensive bot development tools. The document provides an example of how one company used SAP Intelligent RPA to automate tasks like auto-uploading documents and improving order closing, reducing time spent on those processes.
Extend SAP S/4HANA to deliver real-time intelligent processes (SAP Technology)
Organizations are facing challenges with complex data landscapes that limit their ability to gain insights and act with agility. SAP's Business Technology Platform helps address these challenges by enabling organizations to connect and share all their data, deliver real-time insights, and build intelligent apps and processes. The platform includes solutions for data management, analytics, application development and integration, as well as intelligent technologies. It provides the tools and capabilities to transform data into business value and realize an intelligent enterprise.
Process optimization and automation for SAP S/4HANA with SAP’s Business Techn... (SAP Technology)
This document discusses process optimization and automation for SAP S/4HANA. It provides statistics on the benefits of robotic process automation (RPA) and process mining, including up to a 30% increase in productivity and a 50% reduction in data footprint. The document discusses tools for integration, process management, automation, and monitoring from SAP that can be used to streamline, optimize, and automate SAP S/4HANA processes. These include SAP Cloud Platform Integration Suite, SAP Intelligent Business Process Management, SAP Process Orchestration, SAP Intelligent RPA, and SAP Process Mining by Celonis. Use cases and examples of automating SAP S/4HANA processes are also covered.
Accelerate your journey to SAP S/4HANA with SAP’s Business Technology Platform (SAP Technology)
Best run companies are already benefiting from SAP S/4HANA. Move to SAP S/4HANA quickly with confidence. SAP EIM and SAP Cloud Platform capabilities help you migrate data, curate, and govern master data, archive unused data, move custom code. With performance, functional, and security test tools, customers can move to SAP S/4HANA with confidence.
Accelerate Your Move to an Intelligent Enterprise with SAP Cloud Platform and... (SAP Technology)
Manage rapidly evolving business processes with greater ease. Minimize complex, time-consuming updates. Reduce high maintenance costs. Discover how enterprise IT teams are conquering the challenges of migrating from legacy ERP systems and innovating faster with SAP Cloud Platform and SAP S/4HANA.
Transform your business with intelligent insights and SAP S/4HANA (SAP Technology)
Extend SAP S/4HANA with SAP’s Business Technology Platform to enable digital transformation through data-driven intelligence and process innovation. Learn how SAP’s Business Technology Platform delivers unprecedented opportunities for business process innovation.
SAP Cloud Platform for SAP S/4HANA: Accelerate your move to an Intelligent En... (SAP Technology)
Keep SAP S/4HANA clean to deliver a stable digital core and accelerate innovations. Leverage SAP Cloud Platform for SAP S/4HANA integration with SAP and non-SAP applications, side-by-side extensions, and innovations so that you can accelerate your move to SAP S/4HANA, deliver a stable S/4HANA system, and offer agile development.
Innovate collaborative applications with SAP Jam Collaboration & SAP Cloud Pl... (SAP Technology)
Learn how SAP Jam Collaboration together with SAP Cloud Platform helps you to transform your existing applications by enabling collaboration inside applications and allows you to innovate next-generation collaborative applications. SAP Jam Collaboration also simplifies application development by providing a single platform to brainstorm, collaborate, share documents, and track progress, and it streamlines application roll-out and support by providing a place to share training material and self-service support forums. With SAP Jam Collaboration, you can also build a modern intranet with collaboration capabilities that is easy to update and accessible from mobile devices and outside the corporate network. You can simplify your IT landscape by consolidating software solutions such as document storage systems, blogging platforms, wikis, collaboration platforms, and real-time messaging tools using SAP Jam Collaboration.
The document discusses how consumer expectations are changing and companies must adapt their business models to engage with each consumer individually through personalized and authentic relationships. It identifies collecting more consumer and market data from various sources, analyzing big data in real-time, and using technologies like IoT to enrich consumer experiences as important technology priorities for consumer companies. SAP Leonardo and its cloud and IoT capabilities can help companies with digital transformation, gain new consumer insights, and accelerate their business in order to engage digital consumers.
The IoT Imperative for Discrete Manufacturers - Automotive, Aerospace & Defen... (SAP Technology)
The document discusses how digital transformation through the Internet of Things (IoT) is disrupting discrete manufacturing industries. It recommends that manufacturers innovate by focusing on creating business value with IoT, investing in key digital scenarios, and integrating new technologies. IoT capabilities around connectivity, data management, analytics and cloud technology can help manufacturers improve quality, strategically manage assets, increase efficiency and engage customers.
IoT is Enabling a New Era of Shareholder Value in Energy and Natural Resource... (SAP Technology)
The document discusses how Internet of Things (IoT) technology can help energy and natural resource companies improve operations and drive innovation. Specifically, it explains that IoT allows these companies to do more with the sensor data generated by field assets by integrating it with cloud, analytics, and machine learning capabilities. This convergence of operations data with real-time business information through enhanced IoT capabilities allows workers to make better decisions. The document promotes SAP Leonardo's IoT capabilities for connecting sensor networks with core applications to optimize asset usage, minimize costs, and improve workforce safety in the energy and natural resources industry.
The IoT Imperative in Government and Healthcare (SAP Technology)
The document discusses how public service organizations can use Internet of Things (IoT) technologies to address challenges around aging populations, urbanization, and changing expectations. It notes that constrained budgets and increasing demands are stressing governments. The document advocates that organizations pursue digital transformation and use IoT to improve service efficiency, enhance smart cities, transform defense/security, and advance healthcare outcomes rather than just service volume. SAP Leonardo IoT capabilities can help organizations leverage big data and analytics to seize opportunities in these areas.
This presentation talks about how SAP S/4HANA can empower finance to strategically guide your business evolution via instant insights, intuitive user experience, and a flexible non-disruptive platform.
Five Reasons To Skip SAP Suite on HANA and Go Directly to SAP S/4HANA (SAP Technology)
Don’t burn extra resources. Read on to discover five reasons to fly direct to SAP S/4HANA instead of adding a layover at SAP Suite on HANA (SoH) on the way.
SAP Helps Reduce Silos Between Business and Spatial Data (SAP Technology)
Discover how spatial solutions from SAP can help your business leverage geographic and spatial data to deliver location intelligence, increase insight, and improve efficiency. Solutions include SAP HANA, SAP BusinessObjects Analytics, SAP Geographical Enablement Framework, SAP GEO.e, Galigeo.
This presentation is a supplement to the "Why SAP HANA?" video on Youtube. Download and follow along to the video. An added bonus to this presentation is an Appendix and Presentation Notes slides.
Video Link: https://www.youtube.com/watch?v=VCEr9Y8ZrVQ
Spotlight on Financial Services with Calypso and SAP ASE (SAP Technology)
Capital markets solutions from Calypso and SAP ASE can help your organization reduce the total number of systems in use, simplify business architecture, streamline processes, and improve efficiency – all while reducing total cost of ownership.
Spark Usage in Enterprise Business Operations (SAP Technology)
At Spark Summit East 2016, SAP’s Ken Tsai highlighted how SAP HANA Vora extends Apache Spark to provide OLAP modeling capabilities and real-time query federation to enterprise data. You will learn real-world use cases where instant insight from a combination of enterprise and Hadoop data make an impact on everyday business operations.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Building Production Ready Search Pipelines with Spark and Milvus (Zilliz)
Spark is a widely used ETL tool for processing, indexing, and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data, extract vector representations, and push the vectors to the Milvus vector database for search serving.
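A minimal sketch of that hand-off follows, under a few assumptions: a Milvus collection named "docs" already exists with an int64 primary key and an 8-dimensional float-vector field, Milvus is reachable on localhost:19530, and embed() is a stand-in for a real embedding model.

```python
# Hedged sketch of the Spark -> Milvus hand-off described above.
from pyspark.sql import SparkSession
from pymilvus import connections, Collection

def embed(text):
    # placeholder embedding; a real pipeline would call a model here
    return [float(ord(c) % 7) for c in text[:8].ljust(8)]

spark = SparkSession.builder.appName("milvus-ingest").getOrCreate()
docs = spark.createDataFrame(
    [(1, "vector databases"), (2, "spark etl pipelines")], ["id", "text"]
)

rows = docs.collect()   # small demo; a large job would write per partition
connections.connect(alias="default", host="localhost", port="19530")
collection = Collection("docs")   # assumes the collection was created beforehand
collection.insert([[r.id for r in rows], [embed(r.text) for r in rows]])
collection.flush()
```

In a production job the per-partition write (rather than a driver-side collect) keeps the ingestion parallel and avoids pulling the whole dataset onto one machine.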
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative projects and software. The reality is that a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to immerse yourself in a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
Best 20 SEO Techniques To Improve Website Visibility In SERP (Pixlogix Infotech)
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Pushing the limits of ePRTC: 100ns holdover for 100 days – Adtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
HCL Notes and Domino License Cost Reduction in the World of DLAU – panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Climate Impact of Software Testing at Nordic Testing Days – Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability at the expense of security. This best practices guide outlines steps users can take to better protect personal devices and information.
National Security Agency - NSA mobile device best practices
Sybase ASE 15.7 – Two Case Studies of Successful Migration
1. ISUG-TECH 2015 Conference
ASE 15.7: 2 case studies of successful migration
(shhhhh… preps, floods & sc)
Andrew Melkonyan, Senior DB Architect
(c) 2015 Independent SAP Technical User Group, Annual Conference 2015
2. Agenda
Welcome
Speaker Introduction
Session Title (add presentation title)
Q & A
3. Welcome
ISUG, ASUG, UKSUG, SAUG, TechEd & d-code:
• Independent SAP Technical User Group (www.isug.com)…
• Americas’ SAP Users’ Group (www.asug.com)…
• UK (and Europe) SAP Database and Technology products User Group (www.uksug.com)…
• SAP Australian User Group (www.saug.com.au)…
• SAP TechED & d-code (technologists, engineers, developers – www.sapteched.com)…
Sybase is dead, long live SAP Sybase…
• Most user groups are heavily SAP oriented…
• ISUG/UK-SUG leads the way with ex-Sybase technologies…
4. Ecce homo: Mr. Andrew Melkonyan
Over 15 years working with Sybase (ASE/RS)
Developer… DBA… Lead DBA… Team Leader… Consultant…
Ex-Ness employee – official Sybase products re-seller in Israel, responsible for local case management (via case express)…
Passionate for Sybase
http://andrewmeph.wordpress.com…
http://scn.sap.com/people/andrew.melkonyan...
5. Safety Zone (12.5.x) vs. Futuristic Dreams (15.7 & beyond)…
Migration to 15.7 – Motivation
[Architecture diagram: the existing 32-bit ASE 12.5.4 environment (M5000 hosts MS1/MS2, PROD/DRP/PRDR with replication, ~1000 TPS) alongside the target 64-bit ASE 15.7 environment (T5-4 hosts, PROD/DRP/PRDR with replication, IMDB and ETL feeds, 3000+ TPS).]
6. Migration to 15.7 – 2 case studies
In the past 2 years I witnessed two reasonably large production ASE servers failing to migrate from ASE 12.5.x to pre-15.7 releases of ASE 15:
One upgrade (Customer#1) was performed on an HP Integrity BL890c i2 host with 8 Intel(R)
Itanium(R) Processor 9340s (1.6 GHz), 32 logical processors (4 per socket), 256 GB RAM –
24 Engine ASE, HPUX.
The other upgrade (Customer#2) was performed on an M5000 server with 8 SPARC VII
Processors (2.4 GHz), 32 logical processors (4 per socket), 128 GB RAM – 16 Engine ASE,
Solaris.
A disproportionately high degree of engine utilization coupled with a significant drop in throughput turned each migration attempt into a failure.
7. Migration to 15.5 – Cust#1 Profile
Customer #1: an OLTP server (12.5.3) accessed by a mixture of clients – mostly PowerBuilder CTLIB software.
ASE 12.5.3: runs about 300 transactions per second with an average of 25% engine utilization. The system runs about 600 procedure requests and 1700 statements per second (Total Rows Affected: ~6.5K, Total Index Scans ~72K, Total Lock Requests ~300K, 1.4 M bytes received/sent per second).
8. Migration to 15.5 – Cust#1
Early during migration ASE started to exhibit higher-than-expected engine utilization while throughput sank.
ASE 15.5: runs about 100 transactions per second with an average of 30% engine utilization. The system runs ~600 procedure and ~500 statement requests per second.
No apparent problems with ASE configuration. No apparent reasons for slowing down. TF753 did not help. Migration was aborted.
9. Migration to 15.5 – Cust#1, TSE
Sybase TS initial direction: procedure cache configured too small (12 GB for 24 Engine ASE) & statement cache configured too
large (2 GB). The two together were thought to result in high spinlock contention on ASE resources (SSQLCACHE and
RPROCMGR spinlocks).
10. Migration to 15.5 – Cust#2 Profile
Customer #2: an OLTP server (12.5.4) accessed by a mixture of clients – JDBC, BDE and native CTLIB software.
ASE 12.5.4: Runs about 900 transactions per second with an average of 40% engine utilization.
The system runs approximately 1400 procedure and 800 statement requests per second (Total Rows
Affected: ~15K, Total Index Scans ~65K, Total Lock Requests ~250K, 1 M bytes received/sent per
second).
11. Migration to 15.5 – Cust#2
Early during migration ASE started to exhibit extremely high engine utilization while throughput sank.
ASE 15.[0|5]: runs about 200 transactions per second with an average of 95% engine utilization. The system runs ~650 procedure requests per second. Again, no apparent problems with ASE configuration. No apparent reasons to slow down. TF753 did not help. Migration aborted – twice (with a massive code review in between).
12. Migration to 15.5 – Cust#2, TSE
Sybase TS initial direction: procedure cache fragmentation (4 GB for a 16 engine ASE).
In the aftermath of the work done for Customer #1 it became clear that here, too, the problem was around RPROCMGR (visible neither in a regular sp_sysmon invocation nor through the MDAs – at that time).
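As an aside on monitoring: later ASE 15.7 SP levels expose spinlock statistics through the MDA tables, so this kind of contention can now be confirmed without a full sp_sysmon run. A minimal sketch, assuming monSpinlockActivity is enabled on your SP level and mon_role is granted (column names per the documented layout – verify on your release):

-- Look for statement/procedure cache related spinlocks with high contention
select SpinlockName, Grabs, Spins, Waits, Contention
from master..monSpinlockActivity
where SpinlockName like '%SSQLCACHE%'
   or SpinlockName like '%PROCMGR%'
order by Contention desc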
13. Migration to 15.x – Aftermath
Following the failed migrations, new simulators were written by the customers’ teams in order to reproduce them better. Neither customer succeeded in reproducing a comparable throughput drop.
Only under extreme stress did the same degree of spinlock contention or drop in throughput start to surface.
Narrowing down the checks, it was found that what causes a high degree of contention is running a high volume of prepared statements simultaneously.
14. Migration to 15.x – Aftermath
In order to deconstruct the migration failures we had to:
1. Learn the difference in the impact of prepared statements on ASE 12.5.x versus ASE 15.x.
2. See what is so peculiar about these customers’ ASE environments from the prepared statement perspective.
3. Learn the right way to handle a high influx of prepared statement calls with the tools available on the DBMS side.
15. To Prep or not to Prep, is it a question?
When application code uses the prepared statement API, both the client host and the DBMS server host prepare internal memory structures designed for subsequent reuse. For ASE, this structure is a lightweight procedure – or LWP.
Depending on client connection settings ASE may receive and handle:
A. Fully prepared statements (sent via TDS_DYNP).
B. Partially prepared statements (sent via TDS_LANG).
The footprint of each on ASE is different. When these structures are created without the purpose
of being reused and sent en-masse to ASE they may cause substantial damage.
The distribution of prepared statement types may be inspected either with dbcc cis trace flags
or in monSysSQLText (DYNPs are identified as “DYNAMIC_SQL... CREATE PROC…”).
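A hedged sketch of the monSysSQLText check mentioned above, assuming SQL text capture is configured (the 'max SQL text monitored' / SQL text pipe options) and mon_role is granted; fully prepared (DYNP) statements show up as internally generated DYNAMIC_SQL ... create proc text:

-- Rough count of fully prepared (DYNP) statement traffic captured by monSysSQLText
select count(*) as dynp_batches
from master..monSysSQLText
where SQLText like 'DYNAMIC_SQL%'

-- Look at a few of them to identify the client code that generates them
select top 20 SPID, SQLText
from master..monSysSQLText
where SQLText like 'DYNAMIC_SQL%'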
16. DYNP impact on ASE 15.x
Consider the table below:
We use three JDBC clients running (fully) prepared statements in a loop. Instead of reusing them in client code, we just create them and drop them right after a single use. All the 15.x versions prior to 15.7 produce significant spinlock contention and the throughput drops. Only 15.7 with the “streamlined dynamic SQL” option turned on fixes the issue (the last column: note the change in procedure removals and statement reuse).
For ASE 12.5.x this was not an issue at all.
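The procedure-removal and statement-reuse counters referred to in the table come from the “Procedure Cache Management” section of sp_sysmon; a simple way to capture a comparable sample while such a test is running (the one-minute interval is just an assumption):

-- Collect a one-minute sp_sysmon sample while the three JDBC clients run;
-- compare "Procedure Requests" vs "Procedure Removals" and the statement cache
-- figures in the "Procedure Cache Management" section of the report
sp_sysmon "00:01:00"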
17. DYNPs are better Streamlined
Given the impact of fully prepared statements on ASE 15.x, the following recommendations apply to applications hitting ASE at a high rate with fully prepared statements:
If the application layer must use prepared statement semantics and there is a high volume of fully prepared statements hitting ASE (generated and dropped at once rather than reused), run ASE 15.7 ESD#1 or later and turn the “streamlined dynamic SQL” option on.
Earlier versions of ASE 15.x cannot handle a high rate of fully prepared statements well. For those versions the dynamic prepare option must be turned off at the driver/connection level (e.g. DYNAMIC_PREPARE = false for JDBC/ODBC clients).
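A minimal sketch of the two knobs just mentioned. The server-side option is the ASE 15.7 “streamlined dynamic SQL” configuration parameter; DYNAMIC_PREPARE is a client-side jConnect/ODBC connection property, shown here only as a comment since it is not set through T-SQL:

-- ASE 15.7 ESD#1 or later: let repeated fully prepared (DYNP) requests reuse the cached LWP
sp_configure "streamlined dynamic SQL", 1

-- Pre-15.7 alternative (client side): disable fully prepared statements in the driver,
-- e.g. DYNAMIC_PREPARE=false in the jConnect connection properties, so statements
-- arrive as TDS_LANG rather than TDS_DYNP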
18. Non-DYNP impact on ASE 15.x
With the statement cache sized properly ASE 15.x handles load generated by partially
prepared statements much better than ASE 12.5.x.
The problem arises only when client code generates a high volume of unique SQL statements that the application layer forces into the prepared statement API – again generated and dropped rather than being reused. In this case the statement cache becomes inefficient. Statements come in and out of the statement cache at a high rate, causing high statement cache turnover. Throughput goes down and engine utilization goes up.
In fact, in this case fully and partially prepared statements as well as regular callable
statements have the same negative impact.
19. Non-DYNP impact on ASE 15.x
Consider the table below:
We use three JDBC clients running (partially) prepared (or callable) statements in a loop – again with no reuse: create -> execute once -> free. We generate unique SQL statements. Statement cache turnover goes up. ASE 15.5 can handle this situation only if the “bad” unique code is executed with the statement cache turned off. ASE 15.7 handles the situation gracefully.
For ASE 12.5.x this was not an issue at all.
20. Statement Cache Wonders
Statement cache management in 15.7 has been improved to handle reuse of statements much
more effectively. Still, flooding ASE with unique statements is risky:
If the application layer must use prepared statement logic, or there is a high volume of unique statements streamed into ASE (either due to bad coding or third-party application components – e.g. the data-window), ASE 15.7 ESD#1 or later will in most cases deal with it gracefully. However, if the statement cache gets over-flooded with statements, contention will arise to the degree of making ASE inoperable. In that case it is better to cut such statements off from the statement cache altogether.
Earlier ASE 15.x versions cannot handle the situation unless this stream of unique code hitting the statement cache is isolated at the connection level and the statement cache is turned off for it (e.g., set statement_cache off in a login trigger).
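A hedged sketch of the login-trigger approach mentioned above, using a hypothetical login bad_app_login whose unique-SQL stream should bypass the statement cache; set export_options on is needed so the setting survives past the trigger procedure itself:

-- Procedure to be executed automatically when the offending login connects
create procedure sp_no_stmt_cache
as
begin
    set export_options on      -- propagate the set options to the user session
    set statement_cache off    -- keep this login's unique SQL out of the statement cache
end
go

-- Attach it as the login script (login trigger) for that login (hypothetical login name)
exec sp_modifylogin bad_app_login, "login script", "sp_no_stmt_cache"
go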
21. Cust#1: Migration Fiasco Revisited
Let’s inspect the prepared statement situation in 12.5.3 for Customer #1.
We can see here that there is a medium rate of procedure requests; however, there are almost no procedure removals. The client application here issues few or no fully prepared statement calls (confirmed through monSysSQLText inspection).
In addition, we see ~2000 statement requests per second.
22. Cust#1: Simulation & SC size
We experimented with the statement cache size during simulation sessions and found that the greater the cache, the lower the engine utilization – all the way up to a 2 GB statement cache. So a 2 GB statement cache by itself did not constitute a problem here (compare the throughput below vs. the migration data).
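For reference, “statement cache size” is specified in 2K pages, so the 2 GB cache used in the simulations corresponds to 1,048,576 pages; a minimal sketch (enough 'max memory' must be available for the change to take effect):

-- 2 GB statement cache = 2 * 1024 * 1024 KB / 2 KB per page = 1,048,576 pages
sp_configure "statement cache size", 1048576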
23. Cust#1: Potential Issue
The simulation gave us a chance to inspect the statement cache distribution. Cache buckets were found to contain 1 to ~2000 hashed statements in a very uneven distribution.
Below is a sample statement from an SC bucket: what we see is that it is the same SQL, slightly modified each time by the PB client data-window component.
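On 15.7 a similar inspection can be done from the MDA side rather than by dumping cache buckets directly. A sketch assuming monCachedStatement is enabled and that the show_cached_text() built-in is available on your ESD level (column names per the documented layout – verify on your release):

-- List heavily used statement cache entries and pull their SQL text,
-- to spot near-duplicate statements generated by the data-window component
select top 20 SSQLID, UseCount, show_cached_text(SSQLID) as stmt_text
from master..monCachedStatement
order by UseCount desc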
24. Cust#1: Migration to 15.7
Customer #1 used a lot of unique statements. Versions of ASE prior to 15.7 could not handle this influx with the SC enabled.
For the 15.7 migration to succeed, we had to monitor the SC influx rate closely. A high rate of statement influx is more detrimental to ASE 15.x than it was back in the days of 12.5.4.
During the final migration tests it was discovered that not only did the PB data-window component generate a huge number of large unique statements that land in the statement cache, but the same code is also run by different logins. A statement residing in the SC may be reused ONLY if it is invoked by the same login – unless a specific TF is applied to ASE.
The combination of sizing the SC properly and controlling the SC influx by forcing ASE to reuse statements across different logins made the migration possible.
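A hedged sketch of the kind of SC influx monitoring used here, assuming statement cache monitoring is enabled: take two snapshots of the aggregate statement cache counters a known interval apart and compare them (select * keeps the sketch independent of exact monStatementCache column names, which vary slightly by ESD level):

-- Snapshot 1
select getdate() as sample_time, * into #sc_before from master..monStatementCache

-- Measurement interval (assumption: one minute)
waitfor delay "00:01:00"

-- Snapshot 2
select getdate() as sample_time, * into #sc_after from master..monStatementCache

-- Compare the snapshots side by side; a high delta in inserts/removals relative
-- to cache hits indicates heavy statement cache turnover
select * from #sc_before
select * from #sc_after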
25. Cust#2: Migration Fiasco Revisited
Let’s inspect the prepared statement situation in 12.5.4 for Customer #2.
In contrast with Cust#1, Cust#2 uses the fully prepared statement API extensively. It also has a large number of other statements hitting the SC.
26. Cust#2: Migration to 15.7
15.7 provided a solution for Customer #2 as well. The streamlined dynamic SQL option allowed fully prepared statements to be reused rather than discarded.
Here too, we had to monitor the SC influx rate closely, since a high rate of SC turnover is detrimental to ASE 15.x.
During the migration it was discovered that certain applications wake up periodically and flood the SC, bringing about high spinlock contention on the SC.
The combination of the “streamlined” option and tight control of the SC influx by turning off statement cache access for particular logins made the migration possible.
27. Migration to 15.7 – A Happy Twin
Both customers have been successfully migrated to ASE 15.7
Customer#2*:
moved from 12.5.4 to 15.7 in July 2013 with a performance boost of over 100% in TPS. There were initial issues around ASE
Procedure Cache which were eventually solved using a combination of ASE settings, different access patterns to ASE Statement
Cache for different logins and custom PC monitoring scripts.
Customer#1**:
moved from 12.5.3 to 15.7 in March 2014. In addition to the pressure on Statement Cache generated by the high volume of large
statements, it was discovered that the situation was aggravated by the fact that ASE generates separate entries in SC for the
same statement run by different logins. Customer has over 1000 logins running the same code. Luckily there is a TF to loosen
SSQL-suid link.
* Migration orchestrated and performed under my lead.
** Migration has been performed with my personal involvement on-site.
28. ASE 15.7 Migration – an Aside
In order to prepare for, analyze and monitor the migrations I had to write quite a number of custom monitoring applications.
You may find some of the tools at https://andrewmeph.wordpress.com
29. Questions and Answers
30. Thank You for Attending
Please complete your session feedback form