The document discusses secrets and best practices for optimizing the performance of an OLTP system. It describes how the speaker's team reduced response times by 50% through focused tuning of the application-to-database interface. Techniques that helped include identifying redundant database calls, reducing round trips by passing data in arrays, processing data in bulk with set-based INSERT statements, and eliminating the return of unused data. The document closes with recommendations on locking strategies, JDBC features such as arrays and batching, and setting the optimal row prefetch.
2. Introduction
• Provides real-world best practices for a mission-critical OLTP system.
• Focused tuning of the application-to-database interface can yield a 50% reduction in response time.
• Secrets and best practices that every developer can use for tuning an already tuned system.
3. Background
• New SLAs required that we tune the system.
• Project is very large.
• Waterfall development lifecycle.
• One application engineer and one database engineer performed the majority of the work.
• Originally tuned using traditional methods.
• 75% of the time was spent in the database.

Command          SLA Response Time
CHECK-DOMAIN     25 milliseconds
ADD-DOMAIN       50 milliseconds
MODIFY-DOMAIN    100 milliseconds
DELETE-DOMAIN    100 milliseconds
4. First Secret Discovered
Challenge: Identify where time is actually being spent.
How:
1.) Traced the database sessions.
2.) Identified what database calls were being made and how many milliseconds each individual call took.
3.) Used Trace Analyzer from Oracle to parse the trace files.
Result: For example, we found two business rules calling the same operation to retrieve customer data. By removing the redundant call we saved 8 milliseconds.
Lesson Learned: Every database call counts.
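A minimal sketch of how this step might be reproduced from the application side, assuming an Oracle 10g+ database reached over JDBC; the connection details are placeholders, and the exact tracing interface can differ by database version:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TraceSession {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "app_user", "secret");
             Statement stmt = conn.createStatement()) {
            // Tag the trace file so it is easy to locate on the server.
            stmt.execute("ALTER SESSION SET TRACEFILE_IDENTIFIER = 'oltp_tuning'");
            // Enable extended SQL trace (wait events and bind values).
            stmt.execute("BEGIN DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE); END;");

            // ... run the business transaction being measured ...

            stmt.execute("BEGIN DBMS_SESSION.SESSION_TRACE_DISABLE; END;");
        }
    }
}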
5. Second Secret Discovered
Challenge: Reduce round trips between the database and the application.
• Most important item we found.
• Every database round trip involves additional overhead due to network latency.
6. How: Pass data in object form using arrays.
Use Arrays
• Consider the sample Orders schema above.
• An order for a computer is a good example: a single order contains order items such as the computer, monitor, keyboard, and mouse.
7. Traditional Versus Array
• Traditional Method
– Call a procedure to insert an Order into the Orders table.
– Loop through the Order Items, passing them individually to a procedure to be inserted into the Order Items table.
In the previous example a total of five database calls occur.
• Array Method
– Call a procedure to add the Order and an array of Order Items in one procedure call.
– Internally insert the Order into the Orders table and its associated Order Items into the Order Items table.
Using this method, five database calls are replaced by one.
8. Using Arrays
Below is an example of how to start using arrays.
1.) Create a schema-level object type:
CREATE TYPE OrderItemsType AS OBJECT (
  Order_Id     NUMBER(12),
  Line_Item_Id NUMBER(3),
  ……..);
2.) Create a table of the type:
CREATE TYPE OrderItemsTab AS TABLE OF OrderItemsType;
3.) Create a procedure that takes both the order and an array of the order items:
CREATE PROCEDURE Add_Order_and_Items(
  pOrder_Id    IN Orders.Order_id%TYPE,
  pOrder_Date  IN Orders.Order_Date%TYPE,
  ……..
  pOrder_Items IN OrderItemsTab);
Lesson Learned: The overhead of making database calls is expensive.
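On the application side, the call might look as follows from JDBC. This is only a sketch: it assumes, for illustration, a three-parameter version of Add_Order_and_Items with only the two attributes shown above, and createOracleArray is an Oracle driver extension (12c-era drivers; older drivers used the now-deprecated oracle.sql.ArrayDescriptor):

import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Struct;
import oracle.jdbc.OracleConnection;

public class AddOrderClient {
    // Sketch: conn is an open connection; the order values are placeholders.
    static void addOrder(Connection conn) throws Exception {
        // One Struct per order item, attributes in ORDERITEMSTYPE order.
        Struct monitor  = conn.createStruct("ORDERITEMSTYPE", new Object[]{1001, 1});
        Struct keyboard = conn.createStruct("ORDERITEMSTYPE", new Object[]{1001, 2});

        // Wrap the items in the collection type (Oracle driver extension).
        Array items = conn.unwrap(OracleConnection.class)
                .createOracleArray("ORDERITEMSTAB", new Struct[]{monitor, keyboard});

        // One round trip inserts the order and all of its items.
        try (CallableStatement cs = conn.prepareCall("{call Add_Order_and_Items(?, ?, ?)}")) {
            cs.setInt(1, 1001);                                  // pOrder_Id
            cs.setDate(2, java.sql.Date.valueOf("2010-01-15"));  // pOrder_Date
            cs.setArray(3, items);                               // pOrder_Items
            cs.execute();
        }
    }
}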
9. Third Secret Discovered
Challenge: Process data in bulk.
How: FORALL, BULK COLLECT, or the example below:
INSERT INTO order_items (order_id, line_item_id, ….)
SELECT order_id, line_item_id, …….
FROM TABLE (CAST (pOrder_Items AS OrderItemsTab));
Result: Using one INSERT statement we were able to insert multiple records without a looping control structure.
Lesson Learned: Processing in bulk increases performance.
10. Fourth Secret Discovered
Challenge: Return less data.
How: Identify data not being used by the application.
Result: In our case a system-generated sequence value was returned but never used. We were able to stop returning the value and saw a performance increase.
Lesson Learned: The process of returning even a single value can make a difference.
12. Fastest Way to Pass Data
Identify the fastest way to pass data to the application.
• Three mainstream ways are:
– Primitive Data Types
– In/Out Parameters in PL/SQL
– Result Sets
• Primitive Data Types: fastest way to pass back a single data value.
• In/Out Parameters: fastest way to pass back multiple single data values.
• Result Sets: fastest way to pass back record sets, but not well suited to returning single records.
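As a rough illustration, the first two options look like this from JDBC; Get_Order_Status is a hypothetical procedure with one IN and one OUT parameter, not part of the original schema:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;

public class OutParamExample {
    // Sketch: returns a single value through an OUT parameter instead of a result set.
    static String getOrderStatus(Connection conn, int orderId) throws Exception {
        try (CallableStatement cs = conn.prepareCall("{call Get_Order_Status(?, ?)}")) {
            cs.setInt(1, orderId);                      // IN: primitive data type
            cs.registerOutParameter(2, Types.VARCHAR);  // OUT: single value back
            cs.execute();
            return cs.getString(2);  // no result-set cursor machinery needed
        }
    }
}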
13. NOCOPY
• By default Oracle creates temporary copies of IN/OUT and OUT variable data within stored procedures.
• The NOCOPY hint passes only a reference to the object instead of the entire object.
• The NOCOPY hint is added to a procedure's signature for each IN/OUT and OUT variable (for example, pOrder_Items IN OUT NOCOPY OrderItemsTab).
Pro: Speeds up execution.
Con: Partial data can be returned to the client during error conditions.
14. Locking Strategies
• The locking strategies concentrate on updating or deleting records.
• Inserts do not require locking because the record does not exist until the insert.
• Two main ways to lock data:
For short user or API transactions in B2B environments:
1. Lock and get the data.
2. Update or delete the data.
3. Commit the transaction.
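A minimal JDBC sketch of this short-transaction pattern, assuming auto-commit is disabled and a hypothetical status column on the Orders table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PessimisticLockExample {
    static void shipOrder(Connection conn, int orderId) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement lock = conn.prepareStatement(
                 "SELECT status FROM Orders WHERE order_id = ? FOR UPDATE");
             PreparedStatement upd = conn.prepareStatement(
                 "UPDATE Orders SET status = ? WHERE order_id = ?")) {
            lock.setInt(1, orderId);
            try (ResultSet rs = lock.executeQuery()) {   // 1. lock and get the data
                if (rs.next()) {
                    upd.setString(1, "SHIPPED");         // 2. update the data
                    upd.setInt(2, orderId);
                    upd.executeUpdate();
                }
            }
            conn.commit();                               // 3. commit the transaction
        }
    }
}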
15. Locking Strategies Cont’d
Longer user interface type transactions:
1. Get the data.
2. When the data is updated, use a version identifier to ensure that the record has not been updated by someone else.
• Include the version identifier in the WHERE clause of the UPDATE statement.
3. Verify the transaction:
a. If the statement updates zero rows, the record was already modified and the transaction should be rolled back.
b. If the statement updates the proper number of rows, the record updated successfully.
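A sketch of the version-identifier check in JDBC, assuming a hypothetical numeric version column on the Orders table; versionReadEarlier is the value fetched in step 1:

import java.sql.Connection;
import java.sql.PreparedStatement;

public class OptimisticLockExample {
    static void shipOrder(Connection conn, int orderId, long versionReadEarlier)
            throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE Orders SET status = ?, version = version + 1 "
              + "WHERE order_id = ? AND version = ?")) {
            upd.setString(1, "SHIPPED");
            upd.setInt(2, orderId);
            upd.setLong(3, versionReadEarlier);  // version identifier in the WHERE clause
            if (upd.executeUpdate() == 0) {
                conn.rollback();   // 3a. zero rows: someone else modified the record
            } else {
                conn.commit();     // 3b. proper number of rows: update succeeded
            }
        }
    }
}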
16. JDBC
JDBC features are constantly evolving, so stay aware of the latest features.
• JDBC Arrays
– Tighter integration.
– Can send objects to PL/SQL.
• JDBC Batching
– Greatly increases performance.
– Reduces round trips.
– Client sends a group of statements in a batch.
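A minimal sketch of statement batching with the standard JDBC API; the order_items columns are abbreviated to the two shown earlier:

import java.sql.Connection;
import java.sql.PreparedStatement;

public class BatchingExample {
    static void insertItems(Connection conn, int orderId, int itemCount) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO order_items (order_id, line_item_id) VALUES (?, ?)")) {
            for (int line = 1; line <= itemCount; line++) {
                ps.setInt(1, orderId);
                ps.setInt(2, line);
                ps.addBatch();                // queued locally; no round trip yet
            }
            int[] counts = ps.executeBatch(); // one round trip sends the whole batch
        }
    }
}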
17. JDBC Cont’d
Setting the row prefetch
• When retrieving large amounts of data, setting the setDefaultRowPrefetch() attribute in the application code can move more data with each call.
Use the latest JDBC driver provided by the database vendor.
• Upgraded to the Oracle 10g database driver and saw a 10 millisecond improvement.
• The new driver also had new features.
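A sketch of the prefetch setting; setDefaultRowPrefetch() is an Oracle driver extension, while setFetchSize() is the portable JDBC equivalent, shown here for comparison:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import oracle.jdbc.OracleConnection;

public class PrefetchExample {
    static void readItems(Connection conn) throws Exception {
        // Connection-wide default (Oracle driver extension).
        conn.unwrap(OracleConnection.class).setDefaultRowPrefetch(100);

        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT order_id, line_item_id FROM order_items")) {
            ps.setFetchSize(100);  // portable per-statement alternative
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // ... process each row; up to 100 rows arrive per round trip ...
                }
            }
        }
    }
}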
18. Lessons Learned
• Writing code the right way takes the same amount of effort as writing it the wrong way.
• It should not be assumed that the communication between the application and the database is as efficient as possible.
• In our experience, tuning the application-to-database interface yielded a 50% reduction in response time.
19. Lessons Learned Cont’d
• When passing data, use primitive data types whenever possible; use result sets only when required.
• Use the JDBC array and batching features if possible.
• Use the fastest JDBC driver available; don’t assume you have the fastest.
20. Final Thoughts
• Knowing the secrets and best practices listed today will make any application more scalable by virtue of using fewer scarce resources.
• These best practices should be incorporated into a standard development lifecycle to ensure your system is always operating as efficiently as possible.