Check out the latest article by Darryl Griffiths of Aliter Consulting. SAP on Azure Web Dispatcher High Availability provides an overview of how to utilise an Azure Internal Load Balancer in conjunction with parallel SAP Web Dispatchers to achieve a highly available, load-balanced and scalable solution for fronting SAP Fiori and other SAP components. This deployment is proving very successful on a current SAP Fiori and SAP S/4HANA implementation project for one of our clients.
SAP HANA System Replication (HSR) versus SAP Replication Server (SRS) - Gary Jackson MBCS
This document provides information about SAP HANA System Replication (HSR) and compares it to SAP Replication Server (SRS). HSR replicates transaction log entries from a primary HANA database to secondary databases. It supports synchronous and asynchronous replication and can be used for high availability and disaster recovery. The document outlines the initial setup process and ongoing administration of HSR configurations.
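As a hedged command sketch of the initial HSR setup the document describes (host names, site names and the instance number 00 are illustrative, not from the source):

```shell
# On the primary, as the <sid>adm user: enable system replication
hdbnsutil -sr_enable --name=SITE_A

# On the secondary: stop HANA, register it against the primary, restart
HDB stop
hdbnsutil -sr_register --remoteHost=hana-a --remoteInstance=00 \
    --replicationMode=sync --operationMode=logreplay --name=SITE_B
HDB start

# Verify replication status from the primary
hdbnsutil -sr_state
```

Switching `--replicationMode` to `async` gives the disaster-recovery variant mentioned above, at the cost of a non-zero recovery point objective.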
This document discusses placing the SAP Application Server Central Services (ASCS) into containers on Kubernetes. It proposes using containers for the ASCS and Enqueue Replication Server (ERS) with anti-affinity rules to ensure high availability without traditional clustering. Benefits include simplified high availability without requiring cluster technology while still providing required features and allowing SAP systems to utilize anonymous compute nodes rather than dedicated hardware. Considerations include licensing and ensuring the Message Server and ERS are never placed on the same node.
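The anti-affinity idea can be sketched as a Kubernetes pod-spec fragment (the `app: sap-ascs` label and this exact layout are hypothetical; the point is the scheduling rule that keeps the ERS off the ASCS node):

```yaml
# Fragment of the ERS pod template: never schedule this pod on a node
# that is already running a pod labelled as the ASCS (Message Server).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: sap-ascs        # hypothetical label carried by the ASCS pod
        topologyKey: kubernetes.io/hostname
```

With a `required...` rule the scheduler refuses co-location outright, which is exactly the Message Server/ERS separation guarantee the document calls out.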
How to get the maximum performance from your AEP server. This talk will discuss ways to improve the execution time of short-running jobs and how to properly configure the server depending on the expected number of users and the average size and duration of individual jobs. Examples will include job pooling, database connection sharing, and parallel subprotocol tuning. It will also cover when to use cluster, grid, or load-balanced configurations, along with memory and CPU sizing guidelines.
OpenText Archive Center 16.2 Single File Vendor Interface (VI) using a Microsoft Azure Storage Account as a storage device is now supported on Linux. Check out this brief overview of its usage on one of our current projects. Thanks to Manish Shah (Microsoft) for his contribution and for working with OpenText to achieve support on Linux, to Supriya Pande for her article on the Microsoft Azure Storage Explorer, to Oleh Khrypko (SAP) for his input on handling disaster recovery with OpenText Archive Center, and to Gary Jackson (Aliter Consulting) for the article.
1. The document discusses Discngine's Tibco Spotfire Pipeline Pilot connector, which allows graphs stored in Pipeline Pilot to be accessed and visualized in Spotfire.
2. It describes the architecture of the connector and how it executes Pipeline Pilot protocols to generate HTML pages for visualization in Spotfire.
3. Challenges in integrating the large Spotfire API and synchronizing client and server datasets are also discussed.
This document discusses Kafka Streams and provides details on its architecture, features like active/standby tasks, interactive queries, one hop queries, queryable state during restoration, storage policies, rack-aware task allocation, and finite retention in changelog topics. The key points covered include how Kafka Streams applications are deployed in a distributed manner using active and standby tasks, how state can be queried even when tasks are restoring from failures, and various configuration options for storage, fault tolerance, and changelog topics.
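The active/standby behaviour described above is driven by a handful of Streams settings. A hedged configuration sketch (values and the state directory are illustrative, not recommendations):

```properties
# Keep one warm replica of each task's state store on another instance,
# so failover does not have to rebuild state from the changelog topic.
num.standby.replicas=1
# Local directory for RocksDB state stores backing interactive queries
state.dir=/var/lib/kafka-streams
```

With at least one standby replica, a failed active task can be reassigned to the instance already holding a near-current copy of its state, which is what makes the fast-recovery and queryable-state features practical.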
Spark Streaming provides fault-tolerance through checkpointing and write ahead logs (WAL). Checkpointing saves metadata and generated RDDs to reliable storage to recover from driver failures. WAL saves all received data to log files to enable zero data loss recovery from executor failures. Structured Streaming uses checkpointing for fault-tolerance. Kafka achieves fault-tolerance through replication of partitions across brokers. Flume uses durable file channels and redundant topologies. HDFS replicates blocks across multiple machines. The Lambda architecture handles batch and real-time data through separate batch and speed layers that are merged in the serving layer.
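As a sketch, enabling the write-ahead log in classic Spark Streaming is a single configuration flag (shown here in spark-defaults.conf form):

```properties
# Log all received data to reliable storage before processing,
# enabling zero-data-loss recovery from executor failures.
spark.streaming.receiver.writeAheadLog.enable=true
```

Checkpointing is enabled separately in the driver by calling `StreamingContext.checkpoint()` with a reliable directory (typically on HDFS); both mechanisms together cover driver and executor failures.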
It has only been a few months since PostgreSQL 9.5 was released, and some of our customers are excited about the great new features and performance enhancements in v9.5. But here we are already taking a peek into the next version, and it looks awesome! One of the most awaited features, parallelism, finally makes it to Postgres. The infrastructure for parallelism has been added over the last few releases, but the first parallel operations in query execution will only appear in v9.6.
Asynchronous cascading master to multiple replicas
Asynchronous multi-master
Can be used for:
Improved performance for geographically dispersed users
High availability
Load distribution (OLTP vs. reporting)
(ATS6-DEV06) Using Packages for Protocol, Component, and Application Delivery - BIOVIA
Delivering protocols, components, and applications to users and other developers on an AEP server can be very challenging. Accelrys delivers the majority of its AEP services in the form of packages. This talk will discuss the methods anyone can use to deliver bundled applications as packages, and the benefits of doing so. The discussion will cover how to create packages, modify existing ones, deploy them to servers, and which tools can be used to ensure package quality.
This document provides an overview of Apache Apex, an open source unified streaming and fast batching platform. It discusses key aspects of Apex including its application programming model using operators and directed acyclic graphs, native Hadoop integration using YARN and HDFS, partitioning and scaling operators for high throughput, windowing support, fault tolerance, and data locality features. Examples of building a data processing pipeline and its logical and physical plans are also presented.
Managing and Monitoring HANA 2 active:active with System Replication - Linh Nguyen
Exploring a new feature of HANA 2's system replication: active/active read-enabled mode, which allows read-only queries on the secondary system's tables using the new operation mode 'logreplay_readaccess'.
Note that logreplay_readaccess does not support Dynamic Tiering.
IT-Conductor monitors both the primary and the secondary system.
Ingesting Data from Kafka to JDBC with Transformation and Enrichment - Apache Apex
Presenter: Dr Sandeep Deshmukh, Apache Apex committer and DataTorrent engineer
Abstract:
Ingesting and extracting data from Hadoop can be a frustrating, time consuming activity for many enterprises. Apache Apex Data Ingestion is a standalone big data application that simplifies the collection, aggregation and movement of large amounts of data to and from Hadoop for a more efficient data processing pipeline. Apache Apex Data Ingestion makes configuring and running Hadoop data ingestion and data extraction a point and click process enabling a smooth, easy path to your Hadoop-based big data project.
In this series of talks, we cover how Hadoop ingestion is made easy using Apache Apex. This third talk in the series focuses on ingesting unbounded data from Kafka to JDBC with a couple of processing operators: Transform and Enrich.
(ATS6-PLAT07) Managing AEP in an enterprise environment - BIOVIA
Deployments can range from personal laptop usage to large enterprise environments. The installer allows both interactive and unattended installations. Key folders include Users for individual data, Jobs for temporary execution data, Shared Public for shared resources, and XMLDB for the database. Logs record job executions, authentication events, and errors. Tools like DbUtil allow backup/restore of data, pkgutil creates packages for application delivery, and regress enables test automation. Planning folder locations and maintenance is important for managing resources in an enterprise environment.
The document discusses database migration from older versions of SQL Server to SQL Server 2012. It covers migration paths and strategies such as in-place and side-by-side upgrades; methods include backup/restore and detach/attach. Pre-migration tasks such as backups and consistency checks are important, as are application testing and vendor support. Very large databases require special consideration for migration time and disk space.
Mastering SAP Monitoring - SAP SLT & RFC Connection Monitoring - Linh Nguyen
This document discusses monitoring SAP SLT (SAP Landscape Transformation Replication Server) and RFC (remote function call) connections. It describes the limited monitoring capabilities available within SAP, including LTR, LTRC, LTRO and Solution Manager. It then introduces two alternative monitoring solutions: the OZSoft SAP Management Pack for Microsoft SCOM, and IT-Conductor's cloud-based application performance management. Both allow centralized, automated monitoring of SLT replication tables and RFC destinations without requiring SAP-specific software.
Tips on implementing SAP adaptive computing design with SAP LaMa on Microsoft Azure. We discuss the best options for SAP and some of the challenges faced.
BW Migration to HANA Part 3 - Post-processing on the Migrated System - Linh Nguyen
This series of publications provides an overview and explanation of the major steps and considerations for BW on HANA migrations from anyDB (any source database). The complex procedure involves:
1) Preparatory work in the BW system
2) SUM DMO upgrade and the actual migration
3) Post processing on the migrated systems
This part focuses on post-processing, which includes standard tasks after upgrade and HANA-specific post-tasks.
February 2017 HUG: Exactly-once end-to-end processing with Apache Apex - Yahoo Developer Network
Apache Apex (http://apex.apache.org/) is a stream processing platform that helps organizations build processing pipelines with fault tolerance and strong processing guarantees. It was built to support low processing latency, high throughput, scalability, interoperability, high availability and security. The platform comes with the Malhar library, an extensive collection of processing operators and a wide range of input and output connectors for out-of-the-box integration with existing infrastructure. In this talk I describe how the connectors, together with distributed checkpointing (the mechanism Apex uses to support fault tolerance and high availability), provide exactly-once end-to-end processing guarantees.
Speakers:
Vlad Rozov is an Apache Apex PMC member and a back-end engineer at DataTorrent, where he focuses on the buffer server, the Apex platform network layer, benchmarks, and optimizing the core components for low latency and high throughput. Prior to DataTorrent, Vlad worked on a distributed BI platform at Huawei and on multi-dimensional databases (OLAP) at Hyperion Solutions and Oracle.
SAP HANA System Replication - Setup, Operations and HANA Monitoring - Linh Nguyen
SAP HANA Distributed System Replication setup, operations and associated HANA Monitoring of Disaster Recovery (DR) scenario using OZSOFT HANA Management Pack for SCOM
Migrating Oracle database to PostgreSQL - Umair Mansoob
This document discusses migrating an Oracle database to PostgreSQL. It covers initial discovery of the Oracle database features and data types used. A migration assessment would analyze data type mapping, additional PostgreSQL features, and testing requirements. Challenges include porting PL/SQL code, minimizing downtime during migration, and comprehensive testing of applications on the new PostgreSQL platform. Migrating large data sets and ensuring performance for critical applications are also challenges.
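The data-type mapping step can be illustrated with a small translated DDL (table and column names are hypothetical):

```sql
-- Oracle source:
-- CREATE TABLE orders (id NUMBER(10), total NUMBER(12,2),
--                      created DATE, note VARCHAR2(200));

-- PostgreSQL equivalent:
CREATE TABLE orders (
    id      integer,        -- NUMBER(10)    -> integer (or bigint)
    total   numeric(12,2),  -- NUMBER(p,s)   -> numeric(p,s)
    created timestamp,      -- Oracle DATE carries a time part -> timestamp
    note    varchar(200)    -- VARCHAR2(n)   -> varchar(n)
);
```

Mappings like Oracle `DATE` to `timestamp` (not `date`) are the kind of subtlety a migration assessment has to catch, since getting them wrong silently truncates data.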
David Yan offers an overview of Apache Apex, a stream processing engine used in production by several large companies for real-time data analytics.
Apache Apex uses a programming paradigm based on a directed acyclic graph (DAG). Each node in the DAG represents an operator, which can be data input, data output, or data transformation. Each directed edge in the DAG represents a stream, which is the flow of data from one operator to another.
As part of Apex, the Malhar library provides a suite of connector operators so that Apex applications can read from or write to various data sources. It also includes utility operators that are commonly used in streaming applications, such as parsers, deduplicators and join, and generic building blocks that facilitate scalable state management and checkpointing.
In addition to processing based on ingestion time and processing time, Apex supports event-time windows and session windows. It also supports the windowing, watermark, allowed-lateness, accumulation-mode, triggering, and retraction semantics detailed by Apache Beam, as well as feedback loops in the DAG for iterative processing and both at-least-once and end-to-end exactly-once processing guarantees. Apex provides various ways to fine-tune applications, such as operator partitioning, locality, and affinity.
Apex is integrated with several open source projects, including Apache Beam, Apache Samoa (distributed machine learning), and Apache Calcite (SQL-based application specification). Users can choose Apex as the backend engine when running their application model based on these projects.
David explains how to develop fault-tolerant streaming applications with low latency and high throughput using Apex, presenting the programming model with examples and demonstrating how custom business logic can be integrated using both the declarative high-level API and the compositional DAG-level API.
Building and Deploying Large Scale SSRS using Lessons Learned from Customer D... - Denny Lee
This document discusses lessons learned from deploying large scale SQL Server Reporting Services (SSRS) environments based on customer scenarios. It covers the key aspects of success, scaling out the architecture, performance optimization, and troubleshooting. Scaling out involves moving report catalogs to dedicated servers and using a scale out deployment architecture. Performance is optimized through configurations like disabling report history and tuning memory settings. Troubleshooting utilizes logs, monitoring, and diagnosing issues like out of memory errors.
Antonios Chatzipavlis is a database architect and SQL Server expert with over 30 years of experience working with SQL Server. The document provides tips for installing and configuring SQL Server correctly, including selecting the appropriate server hardware, installing Windows, configuring disks and storage, installing and configuring SQL Server, and creating user databases. The goal is to optimize performance and reliability based on best practices.
VMworld 2013: Strategic Reasons for Classifying Workloads for Tier 1 Virtuali... - VMworld
This document discusses the importance of classifying workloads before virtualizing tier 1 applications. Workload classification involves measuring existing application and database workloads to properly size and place them in a new virtualized environment, which reduces risk and speeds up implementation by providing the proper analysis up front. The document outlines the challenges, opportunities, models, metrics, and tools involved, and gives an example of how MolsonCoors used workload classification to virtualize their SAP landscape.
This document provides guidelines for using Oracle Database In-Memory (IM) with SAP applications. It describes two approaches: 1) Using the Oracle Database IM Advisor to identify SAP tables to place in the IM column store, and 2) Manually identifying SAP tables to place in the IM column store based on memory requirements. The IM Advisor requires collecting Automatic Workload Repository (AWR) statistics over multiple days to provide accurate recommendations for SAP workloads. Additional steps are needed to filter the IM Advisor results to identify SAP tables suitable for IM.
Storage Optimization and Operational Simplicity in SAP Adaptive Server Enter... - SAP Technology
This presentation will discuss the key storage optimization and operational simplicity features available in SAP ASE and introduce enhancements such as the heat map, which provides the capability to move data to high- or low-performing storage devices based on access patterns.
This presentation is the definitive list of literally every possible technique for making tempdb faster. It's been run at multiple events around the world and it keeps getting bigger and better.
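Two of the standard tempdb techniques, sketched in T-SQL (the drive path and sizes are hypothetical): pre-size the files instead of relying on autogrow, and add equally sized data files to spread PFS/SGAM allocation-page contention:

```sql
-- Pre-size the primary tempdb data file
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8192MB);

-- Add an equally sized data file; repeat up to roughly one file
-- per scheduler, capping at 8 as a starting point
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 8192MB);
```

Equal sizing matters because SQL Server's proportional-fill algorithm only balances allocations across files of the same size.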
Learn how to execute a SAP S/4HANA Migration - what are the things you need to know? Also check out the repeat of the webinar here! https://sap.na.pgiconnect.com/p2480o9iw1f/
Webinar slides: Our Guide to MySQL & MariaDB Performance Tuning - Severalnines
If you’re asking yourself the following questions when it comes to optimally running your MySQL or MariaDB databases:
- How do I tune them to make best use of the hardware?
- How do I optimize the Operating System?
- How do I best configure MySQL or MariaDB for a specific database workload?
Then this replay is for you!
We discuss some of the settings that are most often tweaked and which can bring you significant improvement in the performance of your MySQL or MariaDB database. We also cover some of the variables which are frequently modified even though they should not.
Performance tuning is not easy, especially if you’re not an experienced DBA, but you can go a surprisingly long way with a few basic guidelines.
This webinar builds upon blog posts by Krzysztof from the ‘Become a MySQL DBA’ series.
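A hedged my.cnf sketch of the kind of variables such tuning sessions usually start with (the values are illustrative, not recommendations for any specific workload):

```ini
[mysqld]
# Typically sized to ~70-80% of RAM on a dedicated database host
innodb_buffer_pool_size = 24G
# Larger redo logs smooth out write-heavy workloads
innodb_log_file_size = 2G
# 1 = full durability per commit; 2 trades up to a second of
# durability for noticeably higher write throughput
innodb_flush_log_at_trx_commit = 1
```

The buffer pool setting alone often dominates: it decides how much of the working set is served from memory rather than disk.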
AGENDA
- What to tune and why?
- Tuning process
- Operating system tuning
- Memory
- I/O performance
- MySQL configuration tuning
- Memory
- I/O performance
- Useful tools
- Do’s and don’ts of MySQL tuning
- Changes in MySQL 8.0
SPEAKER
Krzysztof Książek, Senior Support Engineer at Severalnines, is a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.
SAP #BOBJ #BI 4.1 Upgrade Webcast Series 3: BI 4.1 Sizing and Virtualization - SAP Analytics
http://spr.ly/BI41_Migration_Webinars - Learn how to develop a good strategy for sizing your SAP BusinessObjects BI 4.1 deployment. Understand why core architectural differences between former BOE XI releases and BI 4.1 mandate new sizing considerations. Find out how to test and tune your BI system before releasing it to your user base. Also, learn about virtualization support and guidelines when deploying to virtual and cloud environments.
• Understand how and where virtualization works well
• Learn how to avoid difficult situations when managing virtualized resources
• Develop a strategy that allows for growth, and talk the same language as your administrators
For more on upgrading to SAP BusinessObjects BI 4.1, visit http://www.sapbusinessobjectsbi.com
SA114 - Virtual Notesiality! - How the Notes client and Browser Plugin can ex... - Daniel Reimann
The document provides best practices for installing and configuring IBM Notes and ICAA in virtual environments. It discusses using local multi-user installations of Notes to avoid network latency issues. It recommends configuring Notes to use roaming so that user data is available regardless of which virtual server a user logs into. The document also provides tips for optimizing the Notes installation, such as setting default ODS versions, configuring notes.ini, using a configuration file, and sharing the jvm.shareclasses file across users.
Come along to this session to learn how large-scale systems like SAP, Oracle, Microsoft and others are being used by enterprise customers of all shapes and sizes. In this session you will discover some of the challenges and approaches that will make you successful in deploying and operating these systems on AWS. This is a must-attend session for enterprise customers looking to move material workloads into the cloud.
This document discusses using a traditional migration design for high-volume data integration projects. It notes that transactional integration designs may not be fast enough when large amounts of data need to be integrated initially. The session agenda covers best practices for performance, an overview of integration and migration design patterns, and five migration design practices: using bulk processing when possible, upsert operations, using local resources for lookups, staging data, and multi-processing.
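The "upsert operations" practice above can be sketched in a few lines. The example below uses Python's bundled sqlite3 module purely for illustration (the session itself is tool-agnostic), with a hypothetical customer table; the point is that a single INSERT ... ON CONFLICT statement both inserts new keys and updates existing ones, avoiding per-row lookups during bulk loads.

```python
import sqlite3

# Illustrative upsert (insert-or-update) using SQLite's ON CONFLICT clause.
# Table and column names are hypothetical; executemany() stands in for the
# "bulk processing" practice from the same session.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

rows = [(1, "Acme", "Berlin"), (2, "Globex", "Paris")]
conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", rows)

# A second batch containing both new and existing keys: one statement inserts
# new rows and updates existing ones, with no per-row SELECT round trip.
batch = [(2, "Globex", "Lyon"), (3, "Initech", "Austin")]
conn.executemany(
    "INSERT INTO customer (id, name, city) VALUES (?, ?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name, city = excluded.city",
    batch,
)

print(sorted(conn.execute("SELECT id, city FROM customer").fetchall()))
# [(1, 'Berlin'), (2, 'Lyon'), (3, 'Austin')]
```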
SAP/HANA Financial Closing can help you ACCELERATE your financial closing cycle. Benefit from increased governance, higher user efficiency and automation, strong collaboration, and real-time insight.
SAP Basis training demo - Basis online training in USA, UK and India - magnificsmile
www.Magnifictraining.com - SAP BASIS ONLINE TRAINING. Contact us: info@magnifictraining.com
or call us: +919052666559, +919052666558. SAP technologies covered include SAP Basis, SAP EWM,
SAP HCM, SAP BI/BW, SAP IS Banking, SAP SRM,
and SAP GTS, delivered as hands-on online training on SAP Basis.
SAP BASIS Online Training Course Contents :
What is SAP?
An introduction to ERP
An introduction to SAP
SAP AG: Evolution & Strategy
SAP Product Evolution
SAP Services Overview (OSS)
An introduction to BASIS
Basics to get started with BASIS Administration
An introduction to Operating Systems
An introduction to Database Systems
Overview of computer Networks
Network types & devices
Protocol & IP Address concepts
An introduction to Kernel Software
Description of R/3
Client / Server Solution
Overview of 3 layer interface
SAP Basis training demo - Basis online training in USA, UK and India - magnifics
This document provides an overview of SAP Basis training. It discusses the SAP architecture including how transactions are processed, the different work processes, and basic Basis functions. The key points are:
1) When a user submits a transaction request, it is assigned to a work process by the dispatcher which fulfills the request by accessing the database and communicating with other servers.
2) Work processes include dialog, update, enqueue, batch, and spool processes that handle different transaction types and tasks like database changes.
3) Basis functions involve user administration, client maintenance, transporting changes, performance monitoring, and more administrative and support roles for the SAP system.
Aliter Consulting's latest challenge on a customer project was integrating SAP on Azure with the customer’s SaaS Office 365 environment for outbound and inbound email: inbound email for SAP S/4HANA to support OpenText VIM and SAP GRC, plus other general outbound mail requirements...
This document provides instructions for setting up SSL connectivity between SAP LVM and the SAP Host Agent using x509 certificate authentication. It involves generating a certificate signing request for the LVM server, having it signed by a certificate authority, uploading the signed certificate and CA/ICA certificates to the LVM keystore. It also describes adding the CA/ICA certificates to the Host Agent's PSE, configuring the host profile, and testing the SSL connection between LVM and the Host Agent.
This document provides instructions for integrating SAP Business Process Automation (BPA) with SAP Landscape Virtualization Management (LVM). It involves creating a custom operation in LVM that allows controlling BPA queues. This is done by creating a provider implementation and custom operation in LVM along with a process definition and web service in BPA. It also requires registering a script with the host agent to connect the LVM and BPA configurations. The custom operation then allows holding or releasing BPA queues from the LVM interface.
This document provides an overview of how to customize SAP Landscape Virtualization Management (LVM) with custom operations and hooks. It describes defining a provider implementation ("LVM_CustomOperation_ClusterAdm") and custom operations ("Freeze", "Unfreeze", "Relocate") for managing a Red Hat cluster. A sample script ("ClusterAdm.ksh") demonstrates how custom operations could freeze/unfreeze the cluster before SAP instance start/stop operations. The provider implementation and custom operations/hooks allow LVM to integrate cluster management operations.
This document provides instructions for installing SAP Router using Secure Network Communication (SNC) and registering it with SAP. It outlines downloading the installation files, creating a dedicated system user and filesystem, unpacking and configuring the software, generating and importing an SNC certificate, creating a router table, and starting/stopping the SAP Router service.
This document provides guidance on customizing SAP Landscape Virtualization Management (LVM) to manage custom instance types. It describes how to configure generic operations like detect, monitor, start, and stop by creating scripts referenced in configuration files. An example is provided for managing SAP Replication Server (SRS) instances, with configuration files and sample scripting code shown.
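As a rough illustration of the script-based approach described above, the sketch below shows the shape such a control script might take. Everything here is hypothetical (the action names, the SRS_PRD instance name and the placeholder commands), since the real operations and their registration with the host agent are defined in your LVM configuration files.

```python
# Hypothetical control script for an LVM "custom instance type" (here: SAP
# Replication Server). LVM's generic detect/monitor/start/stop operations
# would be configured to invoke a script like this; the commands below are
# placeholders, not real SRS utilities.
ACTIONS = {
    "start": ["srs_startup.sh", "SRS_PRD"],
    "stop": ["srs_shutdown.sh", "SRS_PRD"],
    "monitor": ["srs_isalive.sh", "SRS_PRD"],
}

def dispatch(action):
    """Map an LVM operation name to the command a real script would run."""
    if action not in ACTIONS:
        raise SystemExit(f"unknown action: {action}")
    # A real script would run the command (e.g. via subprocess) and translate
    # its exit code into the status LVM expects.
    return ACTIONS[action]

print(" ".join(dispatch("monitor")))
# srs_isalive.sh SRS_PRD
```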
The document discusses SAP Web Dispatcher 7.40, which is a load balancer that provides intelligent load distribution for SAP Portal. It can handle stateful or stateless sessions over HTTP or HTTPS invisibly to clients. It supports round-robin load distribution for non-SAP backends like Tomcat. It also allows for multiple SSL certificates to handle multiple domains and backends. SAP Web Dispatcher provides reliability, security, and high performance to handle thousands of concurrent users. It includes features like maintenance mode and custom error pages, and is free to use with an SAP license.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
2. • This document provides a brief summary of experiences and lessons learned following a recent customer migration from Oracle to ASE
• The document is aimed at technical consultants involved with migration to ASE
Introduction
3. • 2087322 - SYB: Where to find information about SAP applications on SAP ASE
• SAP ASE is certified for use with SAP Business Suite or as a standalone database platform
• There are always two delivery channels for the SAP ASE software binaries
• Check if your use case is supported and that you download the relevant software version
• Ensure that you validate the information within the SAP notes to ensure it’s relevant to your use case
#1 – Business Suite Compatibility
4. • Download the latest SP from the SAP Support Portal
• The version of SAP ASE will make a big difference when considering the implementation project
• Consult with your SAP TQM to ensure you plan to be on the optimum version and patch level of SAP ASE for your project timelines
• SAP ASE patches are released as frequently as every 3 months and can potentially contain fixes for a possible data loss or data corruption scenario
#2 – Download the Latest
5. • http://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+15.x+Release+Schedule+and+CR+list+Information
• Revisions to SAP ASE software are released rapidly, so plan to patch every 3 to 6 months at a minimum
• Failing to remain current may create issues with other tightly associated software areas (such as SAP Replication Server)
• Revisions can be delayed dramatically (by months) only to be superseded almost immediately by a later revision
• “Hot fixes” exist, whereby a current revision receives an additional increment in between the previous and the latest
#3 – Stay Current
6. • 1539124 - SYB: Database Configuration for SAP applications on SAP ASE
• 1619967 - SYB: DBA Cockpit Correction Collection SAP Basis 7.31
• Configuration of the SAP ASE database parameters is performed against one core SAP note
• It has a messy layout and can be complex to digest manually
• Changing the parameters to those recommended by SAP is a must, as the out-of-the-box configuration is never optimal and is potentially even unstable
• DBA Cockpit is your friend and allows easy validation of the parameters depending on your NetWeaver release
#4 – Ensure Correct Parameterization
7. • Erroneous or conflicting support statements within SAP notes can cause confusion
• Sometimes the SAP ASE standalone community forgets that SAP ASE can run underneath SAP Business Suite
• If you see an SAP note stating you’re not supported if you do “X” or have “Y” installed, query it with your TQM as it may not be relevant to your use case
• Get the account manager involved with your project
#5 – Confirm Statements with TQM
8. • 1749935 - SYB: Configuration Guide for SAP ASE 15.7
• 1581695 - SYB: Configuration Guide for SAP ASE 16.0
• Some parameters listed in SAP notes will be specific to SAP BW or SAP ERP
• The old OLAP versus OLTP tuning issue is still relevant
• Double check the SAP note containing the SAP ASE recommended parameters and don’t just blindly apply it
#6 – Set Relevant DB Parameters
9. • Patching SAP ASE is simple, so spend the time saved on testing
• Include functional, technical and operational testing, including your system copy process
• Performance testing is a must with the change in database
• Issues detected may take time to resolve, with workarounds possible
• An issue may already be fixed in a later SAP ASE revision – keep an eye on the important notes
#7 – Perform Rigorous Testing
10. • 2077419 - Targeted ASE 15.x Release Schedule and CR list Information
• Check the bug listing of the next revision carefully
• It could save you from potential corruption or an unfixable situation
• SAP ASE bugs are not listed in individual SAP notes but in the Release Information Note for the next revision
#8 – Always Check Bug List
11. • 1618817 - SYB: How to restore an SAP ASE database server (UNIX)
• 1585981 - SYB: Ensuring Recoverability for SAP ASE
• The log files for the database, job server and backup server do not rotate until the SAP ASE instance is restarted
• Keep these files tidy and compressed with your own housekeeping scripts
• Recommendations exist for retaining certain files, such as the last config file, the dumphist file and an export of the sysdevices table, on a separate file system
#9 – Configure Housekeeping
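A minimal sketch of such a housekeeping script, in Python for illustration (a shell script built on find and gzip is equally common). The directory layout, file patterns and retention period are assumptions; and since the active log stays open until the instance restarts, apply this only to rotated or historic copies, never to the file currently being written.

```python
import gzip
import shutil
import time
from pathlib import Path

def compress_old_logs(log_dir, max_age_days=7, patterns=("*.log", "*.out")):
    """Gzip log files older than max_age_days, keeping the name plus .gz.

    Sketch of the housekeeping the slide recommends; a real SAP ASE install
    has its own layout for database, job server and backup server logs, so
    adjust paths and patterns. Do NOT point this at logs the running
    instance still has open.
    """
    cutoff = time.time() - max_age_days * 86400
    for pattern in patterns:
        for path in list(Path(log_dir).glob(pattern)):
            if path.stat().st_mtime < cutoff:
                with path.open("rb") as src, gzip.open(f"{path}.gz", "wb") as dst:
                    shutil.copyfileobj(src, dst)
                path.unlink()  # remove the uncompressed original
```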
12. • Out-of-the-box, the performance of backups and restores is adequate
• A 1.3TB database within 1 stripe can take in excess of 4 hours (to a DataDomain appliance such as EMC Avamar)
• Spend time on performance tuning: adjusting one SAP ASE parameter could reduce runtime by as much as 30%
• Make sure that you test the restore capability
• Allocate adequate disk space for emergency backups (dumps) to disk if you’re planning to back up to a third-party tool
• Allocate adequate disk space for the transaction log in case of emergency situations
#10 – Tune Backup for Performance
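The slide does not name the parameter behind the quoted ~30% gain, but dump striping is one widely used lever: writing the dump through several parallel stripes lets the Backup Server drive more throughput. The helper below just assembles an illustrative DUMP DATABASE statement (the paths, stripe count and compression level are assumptions); it does not talk to a database, so verify the exact syntax and options against your SAP ASE version.

```python
def build_dump_command(db, dump_dir, stripes=4, compression=101):
    """Build an illustrative ASE DUMP DATABASE statement with stripes.

    Each additional "stripe on" clause adds a parallel output file for the
    Backup Server. All names here are placeholders; check your SAP ASE
    documentation for the options your version supports.
    """
    devices = [f"{dump_dir}/{db}.dmp.{i}" for i in range(1, stripes + 1)]
    cmd = f'dump database {db} to "{devices[0]}"'
    for dev in devices[1:]:
        cmd += f'\n  stripe on "{dev}"'
    cmd += f"\nwith compression = {compression}"
    return cmd

print(build_dump_command("PRD", "/sapdumps", stripes=2))
```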
13. • 1996340 - SYB: Default RSDB profile parameters for SAP ASE
– Failure to set these parameters correctly will lead to performance problems during SELECT with IN lists
• During a database platform migration, ensure that you revisit the relevance of any database-specific parameters, especially those concerned with DBSL-level interactions
• Search for notes in component BC-DB-SYB, order by date descending, then filter for relevance against your NetWeaver release and SP level
#11 – Re-Visit NetWeaver Parameters
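For context, the RSDB settings the note covers are ordinary instance-profile parameters. The excerpt below is illustrative only; the exact parameters and the correct values for SAP ASE come from SAP Note 1996340 and vary by NetWeaver release, so treat these numbers as placeholders.

```ini
# Hypothetical instance-profile excerpt - values are placeholders.
# The blocking factors control how Open SQL FOR ALL ENTRIES / IN-list
# statements are split into database calls, which is exactly where badly
# set defaults hurt SELECT with IN lists.
rsdb/max_blocking_factor = 50
rsdb/max_in_blocking_factor = 255
```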
14. • 1702338 - SYB: Database hints in Open SQL for Sybase ASE
– Failure to revisit any hints you had previously specified for your source database may lead to unexpected performance problems
• Your old database platform hints will be ineffective on the new database
• Consider validating whether new hints for ASE are required, or whether the new optimizer will cope automatically
• Ensure that you know how to “EXPLAIN PLAN”, as you’ll need it!
• Budget project time for performance tuning of SQL, especially in custom code
#12 – Re-Visit Any SQL Hints
15. • 2162183 - SYB: Frequently Asked Questions for SAP ASE
– a good starting point for other notes
• 1946048 - Too many UPDATES to Table SWNCMONI
– without this note, high transaction log volumes may be experienced
• 2276031 - Deactivation of BAdI ICF_STAT_COLLECTOR
– without this note, high transaction log volumes may be experienced
Other Useful Notes