The document discusses Adaptive Cursor Sharing (ACS) in Oracle Database 11g, which allows a query with bind variables to have multiple optimal execution plans, generated according to the values bound to the variables, so that performance improves without giving up the hard-parse savings of cursor sharing. ACS monitors the amount of data accessed by a query and generates additional plans when the data volume varies significantly. It works by evaluating the selectivity of predicates and selecting the plan that matches the observed selectivity profile.
Many developers/DBAs are still applying the same tuning techniques in versions 10 and 11 of the database as they did in version 8i, when they first adopted the Cost Based Optimiser. This presentation is designed to bring your SQL tuning skills up to date. It discusses differences in the way the optimiser behaves in version 11g, covers new hints, and looks at new tuning techniques.
Understanding How Adaptive Cursor Sharing (ACS) Produces Multiple Opt... - Carlos Sierra
Adaptive Cursor Sharing (ACS) is a feature available since 11g, enabled by default. ACS can generate multiple non-persistent optimal execution plans for a given SQL statement, but it requires a particular sequence of events to become truly activated. This presentation describes what ACS is, and when it is and is not used. It then demonstrates ACS capabilities and limitations with a live demo.
This session is about: How Adaptive Cursor Sharing (ACS) actually works. How a bind-sensitive cursor becomes bind-aware. What those "ACS buckets" are. How the "Selectivity Profile" works. Why your SQL sometimes becomes bind-aware and sometimes does not. How ACS interacts with SQL Plan Management (SPM). These and other questions about ACS are answered in detail.
Live demonstrations illustrate the ramp-up process in ACS and how some child cursors are created and then flagged as non-shareable. You will also "see" how the ACS Selectivity Profile adapts as new executions use predicates with new selectivities. ACS promotes Plan Flexibility while SPM promotes Plan Stability. Understanding how this duo interacts becomes of great value when some gentle intervention is needed to restore this delicate balance.
This session is for those Developers and DBAs who "need" to understand how things work. ACS can be treated as a black box, or you can "look" inside and understand how it actually works. If you are curious about the ACS functionality, this session sheds some light. Consider this session only if you are already familiar with Cursor Sharing, Binds, Plan Stability and Plan Flexibility.
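The bucket-and-profile mechanics described in this abstract can be illustrated with a toy model. This is a conceptual sketch only, not Oracle's actual algorithm: the class names, the plan-choice rule, and the range-widening factor are all invented for illustration.

```python
# Toy model of how a bind-aware cursor maps predicate selectivity to child
# cursors. Illustrative only; not Oracle's real data structures or logic.

class BindAwareCursor:
    def __init__(self):
        # each child cursor keeps a (low, high) selectivity range and a plan
        self.children = []  # list of dicts: {"range": (low, high), "plan": str}

    def execute(self, selectivity, plan_for):
        # reuse a child whose selectivity profile covers this execution
        for child in self.children:
            low, high = child["range"]
            if low <= selectivity <= high:
                return child["plan"]
        # otherwise "hard parse" a new child and remember its selectivity range
        plan = plan_for(selectivity)
        self.children.append({"range": (selectivity * 0.9, selectivity * 1.1),
                              "plan": plan})
        return plan

def plan_for(selectivity):
    # a cost-based choice: index access for selective predicates, scan otherwise
    return "INDEX RANGE SCAN" if selectivity < 0.05 else "FULL TABLE SCAN"

cursor = BindAwareCursor()
print(cursor.execute(0.001, plan_for))   # selective bind -> index plan
print(cursor.execute(0.50, plan_for))    # unselective bind -> full scan
print(cursor.execute(0.0011, plan_for))  # nearby selectivity -> plan reused
```

The third call lands inside the first child's selectivity range, so no new child cursor is created; that reuse-or-reparse decision is the essence of the ramp-up behaviour the session demonstrates.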
Heuristic design of experiments with meta gradient search - Greg Makowski
Once you have started learning about predictive algorithms and the basic knowledge-discovery-in-databases process, what is the next level of detail to learn for a consulting project?
* Give examples of the many model training parameters
* Track results in a "model notebook"
* Use a model metric that combines both accuracy and generalization to rank models
* Strategically search over the model training parameters using a gradient-descent approach
* Describe an arbitrarily complex predictive system using sensitivity analysis
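The steps above can be sketched as a small search loop: hill-climb over discrete training parameters, log every trial in a "model notebook", and rank trials with a metric that rewards validation accuracy while penalizing the train/validation gap. The `evaluate()` function and all its numbers are invented for illustration.

```python
# A minimal sketch of meta gradient search over model training parameters.
# evaluate() is a stand-in for real model training, invented for the sketch.

def evaluate(params):
    # pretend training: deeper models fit train better but generalize worse
    depth, lr = params["depth"], params["learning_rate"]
    train_acc = min(0.99, 0.70 + 0.04 * depth)
    valid_acc = train_acc - 0.01 * depth * depth * lr
    return train_acc, valid_acc

def score(params, gap_penalty=2.0):
    # combine accuracy and generalization into one ranking metric
    train_acc, valid_acc = evaluate(params)
    return valid_acc - gap_penalty * (train_acc - valid_acc)

def hill_climb(params, grid, iterations=20):
    # the "model notebook": record every accepted trial for reproducibility
    notebook = [(dict(params), score(params))]
    for _ in range(iterations):
        best = notebook[-1]
        improved = False
        for name, values in grid.items():
            for value in values:
                trial = dict(best[0])
                trial[name] = value
                s = score(trial)
                if s > best[1]:
                    best, improved = (trial, s), True
        notebook.append(best)
        if not improved:
            break
    return notebook

grid = {"depth": [1, 2, 4, 6, 8], "learning_rate": [0.01, 0.1, 0.3]}
notebook = hill_climb({"depth": 8, "learning_rate": 0.3}, grid)
print(notebook[-1])
```

Starting from an overfit configuration, the search walks toward a setting whose train/validation gap is small, which is the point of combining both terms in the metric.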
Meeting the challenges of OLTP Big Data with Scylla - ScyllaDB
Scylla has emerged as the most cost-effective way to serve large amounts of data. In this keynote we will cover what makes Scylla such a great match for big data applications, what we’ve done over the past year, and our plans for the near future: performance, reliability, manageability, and above all, making Scylla easier to consume in the modern cloud environment.
Tales From The Front: An Architecture For Multi-Data Center Scalable Applicat... - DataStax Academy
- Quick review of Cassandra functionality that applies to this use case
- Common Data Center and application architectures for highly available inventory applications, and why they were designed that way
- Cassandra implementations vis-a-vis infrastructure capabilities
- The impedance mismatch: compromises made to fit into IT infrastructures designed and implemented with an old mindset
Clipper: A Low-Latency Online Prediction Serving System - Databricks
Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and robust predictions under heavy serving loads. However, most machine learning frameworks and systems only address model training and not deployment.
Clipper is a general-purpose model-serving system that addresses these challenges. Interposing between applications that consume predictions and the machine-learning models that produce predictions, Clipper simplifies the model deployment process by isolating models in their own containers and communicating with them over a lightweight RPC system. This architecture allows models to be deployed for serving in the same runtime environment as that used during training. Further, it provides simple mechanisms for scaling out models to meet increased throughput demands and performing fine-grained physical resource allocation for each model.
In this talk, I will provide an overview of the Clipper serving system and then discuss how to get started using Clipper to serve Spark and TensorFlow models in a production serving environment.
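The interposition architecture described above can be sketched schematically. The classes below are plain-Python stand-ins invented for this sketch (they are not Clipper's actual API); the "container" and "RPC" boundaries are simulated with ordinary objects and method calls.

```python
# Schematic of the interposition pattern: applications talk to a serving
# layer, which forwards each query to a model isolated in its own container
# over a lightweight RPC. All names here are illustrative, not Clipper's API.

class ModelContainer:
    """Stands in for a containerized model reached over RPC."""
    def __init__(self, name, predict_fn):
        self.name = name
        self.predict_fn = predict_fn

    def rpc_predict(self, inputs):
        # in Clipper this crosses a container boundary; here it is a call
        return [self.predict_fn(x) for x in inputs]

class ServingLayer:
    def __init__(self):
        self.models = {}  # model name -> list of replicas for scale-out

    def deploy(self, container):
        # adding replicas under the same name models throughput scale-out
        self.models.setdefault(container.name, []).append(container)

    def predict(self, model_name, inputs):
        # a real system would balance across replicas; take the first here
        replicas = self.models[model_name]
        return replicas[0].rpc_predict(inputs)

serving = ServingLayer()
serving.deploy(ModelContainer("doubler", lambda x: 2 * x))
print(serving.predict("doubler", [1, 2, 3]))  # -> [2, 4, 6]
```

Because the application only sees `ServingLayer.predict`, the model behind a name can be retrained, replicated, or replaced without touching application code, which is the decoupling the abstract describes.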
The big data platforms of many organisations are underpinned by a technology that is soon to celebrate its 45th birthday: SQL. This industry stalwart is applied in a multitude of critical points in business data flows; the results that these processes generate may significantly influence business and financial decision making. However, the SQL ecosystem has been overlooked and ignored by more recent innovations in the field of software engineering best practices such as fine grained automated testing and code quality metrics. This exposes organisations to poor application maintainability, high bug rates, and ultimately corporate risk.
We present the work we’ve been doing at Hotels.com to address these issues by bringing some advanced software engineering practices and open source tools to the realm of Apache Hive SQL. We first define the relevance of such approaches and demonstrate how automated testing can be applied to Hive SQL using HiveRunner, a JUnit based testing framework. We next consider how best to structure Hive queries to yield meaningful test scenarios that are maintainable and performant. Finally, we demonstrate how test coverage reports can highlight areas of risk in SQL codebases and weaknesses in the testing process. We do this using Mutant Swarm, an open source mutation testing tool for SQL languages developed by Hotels.com that can deliver insights similar to those produced by Java focused tools such as Jacoco and PIT.
Web Service Composition as a Planning Task: Experiments using Knowledge-Based... - Erick Ornio
Motivated by the problem of automated Web service composition (WSC), we present some empirical evidence to validate the effectiveness of using knowledge-based planning techniques for solving WSC problems. In our experiments we utilize the PKS (Planning with Knowledge and Sensing) planning system which is derived from a generalization of STRIPS. In PKS, the agent’s (incomplete) knowledge is represented by a set of databases and actions are modelled as revisions to the agent’s knowledge state rather than the state of the world. We argue that, despite the intrinsic limited expressiveness of this approach, typical WSC problems can be specified and solved at the knowledge level. We show that this approach scales relatively well under changing conditions (e.g. user constraints). Finally, we discuss implementation issues and propose some architectural guidelines within the context of an agent-oriented framework for inter-operable, intelligent, multi-agent systems for WSC and provisioning.
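The knowledge-level planning idea can be sketched with a toy planner in the spirit of PKS: states are sets of knowledge facts, and actions revise what the agent knows rather than the state of the world. The Web-service domain, action names, and facts below are invented for illustration.

```python
# Toy knowledge-level planner: actions revise the agent's knowledge state.
# The domain (a Web-service composition example) is invented for the sketch.
from collections import deque

# each action: (name, knowledge preconditions, knowledge added)
ACTIONS = [
    ("lookup_user",    {"know(email)"},                {"know(user_id)"}),
    ("query_orders",   {"know(user_id)"},              {"know(orders)"}),
    ("quote_shipping", {"know(orders)", "know(zip)"},  {"know(shipping_cost)"}),
]

def plan(initial, goal):
    # breadth-first search over knowledge states yields a shortest plan
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add in ACTIONS:
            if pre <= state:
                nxt = state | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no composition of services achieves the goal

print(plan({"know(email)", "know(zip)"}, {"know(shipping_cost)"}))
```

The planner chains three "services" purely by reasoning about what the agent would know after each call, mirroring how WSC problems are specified and solved at the knowledge level rather than over concrete world states.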
Deep Dive of ADBMS Migration to Apache Spark—Use Cases Sharing - Databricks
eBay has been using an enterprise ADBMS for over a decade, and our team is working on migrating batch workloads from ADBMS to Spark in 2018. We gathered many experiences and lessons during the whole migration journey (85% automated + 15% manual migration), during which we exposed many unexpected issues and gaps between ADBMS and Spark SQL, made many decisions in practice to fill those gaps, and contributed many fixes to Spark core in order to unblock ourselves. This should be an interesting and helpful session for many people, especially data and software engineers planning and executing their own migration work. During this session we will share many of the specific issues we encountered, and how we resolved or worked around each of them in the real migration process.
Changing landscapes in data integration - Kafka Connect for near real-time da... - HostedbyConfluent
In 2019 we presented "Secure Kafka at scale in true multi-tenant environment" at the SFO Kafka Summit. Back then, Kafka was mainly used for event-driven architectures, high-throughput pub/sub use cases, and as a data plane for log aggregation and for transporting metadata and metrics. A lot has changed since then: our Kafka plant has grown to handle 400B incoming events per day in production alone, and we have introduced a stretch-cluster pattern in addition to the Active-Active cluster replication pattern. Moreover, new use cases have emerged that use Kafka for near-real-time stream processing and for data-movement pipelines across cloud environments. Moving data across systems in near-real time is a hard problem to solve. Kafka Connect is a framework for streaming data into and out of Kafka reliably, and it can be used to build near-real-time data-movement pipelines. In this talk, we will present how Kafka adoption has evolved in our space over the last couple of years, and deep-dive into how we approached providing Managed Kafka Connect, the newest addition to our service portfolio.
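Concretely, a Kafka Connect pipeline is declared as a connector configuration submitted to the Connect REST API. A minimal sketch using the FileStreamSource connector that ships with Kafka follows; the connector name, file path, and topic are placeholders chosen for the example.

```python
# Minimal Kafka Connect connector config, as it would be submitted to the
# Connect REST API. Name, file, and topic below are placeholder values.
import json

config = {
    "name": "orders-file-source",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "/var/log/orders.log",       # source to tail
        "topic": "orders",                    # Kafka topic to produce into
    },
}

# POSTing this JSON to http://<connect-host>:8083/connectors (the default
# Connect REST port) would start the pipeline.
print(json.dumps(config, indent=2))
```

Sink connectors follow the same shape with a sink `connector.class`, which is what makes Connect suitable for both the "in" and "out" halves of a near-real-time data-movement pipeline.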
Couchbase 5.5: N1QL and Indexing features - Keshav Murthy
This deck gives a high-level overview of the N1QL and indexing features in Couchbase 5.5: ANSI joins, hash joins, index partitioning, grouping, aggregation performance, auditing, query performance features, and infrastructure features.
How to use Impala query plan and profile to fix performance issues - Cloudera, Inc.
Apache Impala is an exceptional, best-of-breed massively parallel processing SQL query engine that is a fundamental component of the big data software stack. Juan Yu demystifies the cost model the Impala planner uses and how Impala optimizes queries, explains how to identify performance bottlenecks through the query plan and profile, and shows how to drive Impala to its full potential.
Things You Should Be Doing When Using Cassandra Drivers - Rebecca Mills
Did you know there are some things you should be doing when writing your data-driven application using Apache Cassandra? Let's talk about that. There are features in almost every driver that will keep your application online and running fast. You should know what they are. Even if you already know these features, I'm going to tell you why they work and why they're a good idea. This will not be language specific; I will be using multiple languages and drivers. This talk should appeal to programmers in general.
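One example of the kind of driver feature the talk alludes to is a retry policy with exponential backoff, which keeps an application online through transient replica timeouts. The sketch below is generic plain Python, not any specific driver's API; the function names are invented.

```python
# Generic sketch of a driver-style retry policy with exponential backoff.
# Names are illustrative, not a specific Cassandra driver's API.
import time

def with_retries(request, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return request()
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

calls = []
def flaky_query():
    # simulates a query that times out twice before a replica answers
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("replica did not answer in time")
    return "row"

result = with_retries(flaky_query)
print(result, "after", len(calls), "attempts")
```

Real drivers layer more on top of this, such as load-balancing and speculative execution policies, but the principle is the same: the driver absorbs transient failures so the application does not have to.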
Streams Don't Fail Me Now - Robustness Features in Kafka Streams - HostedbyConfluent
Stream processing applications can experience downtime due to a variety of reasons, such as a Kafka broker or another part of the infrastructure breaking down, an unexpected record (known as a poison pill) that causes the processing logic to get stuck, or a poorly performed upgrade of the application that yields unintended consequences.
Apache Kafka's native stream processing solution, Kafka Streams, has been successfully used with little or no downtime in many companies. This has been made possible by several robustness features built into Streams over the years and best practices that have evolved from many years of experience with production-level workloads.
In this talk, I will present the unique solutions the community has found for making Streams robust, explain how to apply them to your workloads and discuss the remaining challenges. Specifically, I will talk about standby tasks and rack-aware assignments that can help with losing a single node or a whole data center. I will also demonstrate how custom exception handlers and dead letter queues can make a pipeline more resistant to bad data. Finally, I will discuss options to evolve stream topologies safely.
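The dead-letter-queue pattern mentioned above can be sketched generically: records whose processing raises an error are diverted, together with the error, instead of halting the whole pipeline. This is a language-neutral illustration of the pattern, not Kafka Streams API code; the function names are invented.

```python
# Generic dead-letter-queue pattern: divert poison pills instead of crashing.
# Illustrative sketch, not the Kafka Streams exception-handler API.
def process_stream(records, handler, dead_letter_queue):
    results = []
    for record in records:
        try:
            results.append(handler(record))
        except Exception as error:
            # a poison pill: capture it with its error for later inspection
            dead_letter_queue.append({"record": record, "error": repr(error)})
    return results

dlq = []
out = process_stream(["1", "2", "oops", "4"], int, dlq)
print(out)               # [1, 2, 4]
print(dlq[0]["record"])  # oops
```

In a real deployment the diverted records would be produced to a dedicated Kafka topic so they can be replayed or audited once the bad-data cause is fixed.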
Goal-directed evaluation of Answer Set Programs is gaining traction thanks to its amenability to creating AI systems that can, due to the evaluation mechanism used, generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems due to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with other ASP systems: model consistency is checked once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking, which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).
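The difference interleaved consistency checking makes can be shown in miniature on an invented toy problem: counting binary strings with no two adjacent ones. Generate-and-test enumerates every complete candidate and checks afterwards; the interleaved version prunes a branch the moment a "denial" (two adjacent ones) would be violated, so it visits far fewer candidates while finding the same models.

```python
# Generate-and-test vs. interleaved consistency checking, in miniature.
# Toy constraint problem invented for the sketch: no two adjacent ones.
from itertools import product

N = 12
ok = lambda bits: all(not (a and b) for a, b in zip(bits, bits[1:]))

# generate-and-test: enumerate every complete candidate, then check
generate_and_test = sum(1 for bits in product([0, 1], repeat=N) if ok(bits))

# interleaved checking: extend partial models, pruning inconsistent branches
explored = 0
def count(prefix):
    global explored
    explored += 1
    if len(prefix) == N:
        return 1
    # prune immediately when the denial (two adjacent ones) would be violated
    return sum(count(prefix + [b]) for b in (0, 1)
               if not (prefix and prefix[-1] == 1 and b == 1))

models = count([])
print(models, generate_and_test)  # both strategies find the same model count
print(explored, "partial candidates vs", 2 ** N, "complete ones")
```

Both strategies agree on the number of models, but the pruned search touches well under a quarter of the candidates the generate-and-test version enumerates, which is the effect (at toy scale) behind the reported speedups.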
Cassandra Tools and Distributed Administration (Jeffrey Berger, Knewton) | C*... - DataStax
At Knewton we operate a total of 29 clusters across five different VPCs, each cluster ranging from 3 to 24 nodes. For a team of three, maintaining this is not herculean; however, good tools to diagnose issues and gather information in a distributed manner are vital to moving quickly and minimizing the engineering time spent.
The database team at Knewton has been successfully using a combination of Ansible and custom open-sourced tools to maintain and improve the Cassandra deployment at Knewton. I will be talking about several of these tools and giving examples of how we are using them. Specifically, I will discuss the cassandra-tracing tool, which analyzes the contents of the system_traces keyspace, and the cassandra-stat tool, which gives real-time output of the operations of a Cassandra cluster. Distributed administration with ad-hoc Ansible will also be covered, and I will walk through examples of using these commands to identify and remediate cluster-wide issues.
About the Speaker
Jeffrey Berger Lead Database Engineer, Knewton
Dr. Jeffrey Berger is currently the lead database engineer at Knewton, an education tech startup in NYC. He joined the tech scene in NYC in 2013 and spent two years working with MongoDB, becoming a certified MongoDB administrator and a MongoDB Master. He received his Cassandra Administrator certification at Cassandra Summit 2015. He holds a Ph.D. in Theoretical Physics from Penn State and spent several years working on high energy nuclear interactions.
Any DBA, from beginner to advanced level, who wants to fill in some gaps in his/her knowledge about Performance Tuning on an Oracle Database will benefit from this workshop.
Using SQL Plan Management (SPM) to Balance Plan Flexibility and Plan Stability - Enkitec
This presentation is about understanding all 3 components of SPM and how we can use this technology to efficiently migrate "good" Execution Plans from one Release to another, or from one System to another.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
7. Before Bind Peeking
Before 9i, the CBO was blind to the values passed in bind variables
§ Predicate
– WHERE channel_id = :b1
§ Unknowns
– Is :b1 between low and high values of channel_id?
– Is :b1 a popular value of channel_id?
– Are there any rows with value :b1 for channel_id?
§ Penalty
– Possible suboptimal plans
8. With Bind Peeking
9i offers a partial solution
§ Predicate
– WHERE channel_id = :b1
§ Plan is determined by peeked values
– EXEC :b1 := 9;
§ Optimal plan for 1st execution
– CBO can use low/high and histograms on channel_id
§ Penalty
– Possible suboptimal plans for subsequent executions on skewed data
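The peeking behavior above can be sketched in SQL*Plus. The SALES table and its CHANNEL_ID column are illustrative (borrowed from the SH sample schema), not part of the demo scripts:

```sql
-- Bind is declared and set before the first (hard) parse.
VARIABLE b1 NUMBER
EXEC :b1 := 9;

-- First execution: the CBO peeks at :b1 = 9 and costs the plan
-- using low/high column values and any histogram on channel_id.
SELECT COUNT(*) FROM sales WHERE channel_id = :b1;

-- Later executions reuse the same plan, even when the new value
-- has a very different selectivity on skewed data.
EXEC :b1 := 2;
SELECT COUNT(*) FROM sales WHERE channel_id = :b1;
```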
9. Real-life Problem with Bind Peeking
PeopleSoft Payroll application
§ WHERE employee BETWEEN :b1 AND :b2
§ Payroll for one employee
– :b1 = 123456
– :b2 = 123456
§ Payroll for one company
– :b1 = 000001
– :b2 = 999999
§ Doing payroll for one employee first, then for the entire company
10. With Adaptive Cursor Sharing
11g improves cursor sharing
§ Some queries are ACS candidates
§ Sophisticated non-persistent mechanism
§ Selectivity of predicates determines the plan
§ Multiple optimal plans for a query!
– If ACS is successfully applied
§ Penalty
– Marginal increase in CPU and memory overhead
12. ACS high-level Overview
High level overview
§ If SQL with binds meets some requirements
– Flag cursor as bind sensitive
– Start monitoring data volume manipulated by cursor
§ If bind sensitive and data volume manipulated by cursor varies significantly
– Flag cursor as bind aware
– Start generating multiple optimal plans for this query on next hard parse
§ If bind aware then use selectivity of predicates to decide on plan
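The bind-sensitive and bind-aware states can be watched from V$SQL; the :sql_id bind below is a placeholder for the SQL_ID of the monitored statement:

```sql
-- IS_BIND_SENSITIVE / IS_BIND_AWARE / IS_SHAREABLE are Y/N flags
-- maintained per child cursor since 11g.
SELECT child_number, is_bind_sensitive, is_bind_aware, is_shareable,
       executions, rows_processed
  FROM v$sql
 WHERE sql_id = :sql_id
 ORDER BY child_number;
```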
13. Bind Sensitive
Minimum requirements
§ SQL has explicit binds
– Or literals and cursor_sharing is “force”
§ Predicate: column + operator + bind_variable
– Equality operator "=" and a histogram on the column
§ Ex: channel_id = :b1
– Non-equality (range) operator, regardless of a histogram on the column
§ ">", ">=", "<", "<=", BETWEEN, LIKE
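For the equality case, the histogram requirement can be met with DBMS_STATS; the table and column names here are illustrative:

```sql
-- Request a histogram on channel_id so that "channel_id = :b1"
-- qualifies the cursor as bind sensitive.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR COLUMNS CHANNEL_ID SIZE 254');
END;
/
```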
14. Bind Aware
How to become bind aware?
§ Significant changes in data volume manipulated by cursor
– A few rows versus a few thousand rows
– A few thousand rows versus a few million rows
§ Specifying /*+ BIND_AWARE */ CBO Hint
– Bypasses the monitoring phase on data volume
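A minimal sketch of the hinted form, which skips the monitoring ramp-up (table and bind names are illustrative):

```sql
-- The cursor is marked bind aware on its first hard parse,
-- so no warm-up executions are needed.
SELECT /*+ BIND_AWARE */ COUNT(*)
  FROM sales
 WHERE channel_id = :b1;
```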
15. Plan Selection
Based on selectivity profile of predicates
§ Evaluate selectivity of predicates at soft parse
§ Compare to a non-persistent selectivity profile
§ If within ranges of a known profile then select associated plan
§ Else hard parse
– Compute and execute newly generated plan
– Create selectivity profile for new plan or update profile of existing plan
§ If ranges on selectivity profiles overlap then merge profiles
16. V$ dynamic views for ACS
ACS non-persistent performance views
§ V$SQL
– Shareable, bind sensitive and bind aware flags
§ V$SQL_CS_STATISTICS
– Data volume manipulated (rows processed)
§ V$SQL_CS_HISTOGRAM
– Record keeping of data volume per execution (small, medium, large)
§ V$SQL_CS_SELECTIVITY
– Predicates selectivity profiles
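The three ACS-specific views can be probed per child cursor; :sql_id is a placeholder:

```sql
-- Execution-count buckets (0 = small, 1 = medium, 2 = large).
SELECT child_number, bucket_id, count
  FROM v$sql_cs_histogram
 WHERE sql_id = :sql_id;

-- Selectivity ranges (the "selectivity profile") per predicate.
SELECT child_number, predicate, low, high
  FROM v$sql_cs_selectivity
 WHERE sql_id = :sql_id;

-- Rows processed, used to classify executions into buckets.
SELECT child_number, executions, rows_processed
  FROM v$sql_cs_statistics
 WHERE sql_id = :sql_id;
```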
29. Demo 6: When Cursor becomes Bind Aware?
Obtain rows processed from demos 1-5, then guesstimate the aware flag

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q1     9    33    N1/N2     37,382          1       N      ?      ?
q2     5    32    N1/N2     2               0       N      ?      ?
q3     2    999   FTS/FTS   8,021,324       2       Y      ?      ?
q4     9    999   N1/FTS    6,233,815       2       Y      ?      ?
q5     2    33    FTS/N2    1,825,131       2       Y      ?      ?
30. Demo 6: When Cursor becomes Bind Aware?
Obtain rows processed from demos 1-5, then guesstimate the aware flag

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q1     9    33    N1/N2     37,382          1       N      0      N1/N2
q2     5    32    N1/N2     2               0       N      0      N1/N2
q3     2    999   FTS/FTS   8,021,324       2       Y      1      FTS/FTS
q4     9    999   N1/FTS    6,233,815       2       Y      2      N1/FTS
q5     2    33    FTS/N2    1,825,131       2       Y      3      FTS/N2
32. Demo 7: When Cursor becomes Bind Aware?
Compute the bucket, then guesstimate the aware flag and actual plan

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q5     2    33    FTS/N2    1,825,131       ?       ?      ?      ?
q4     9    999   N1/FTS    6,233,815       ?       ?      ?      ?
q3     2    999   FTS/FTS   8,021,324       ?       ?      ?      ?
q2     5    32    N1/N2     2               ?       ?      ?      ?
q1     9    33    N1/N2     37,382          ?       ?      ?      ?
35. Demo 7: When Cursor becomes Bind Aware?
Compute the bucket, then guesstimate the aware flag and actual plan

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q5     2    33    FTS/N2    1,825,131       2       N      0      FTS/N2
q4     9    999   N1/FTS    6,233,815       2       N      0      FTS/N2
q3     2    999   FTS/FTS   8,021,324       2       N      0      FTS/N2
q2     5    32    N1/N2     2               0       N      0      FTS/N2
q1     9    33    N1/N2     37,382          1       Y      1      N1/N2
37. Demo 8: When Cursor becomes Bind Aware?
Guesstimate the aware flag and actual plan

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q5     2    33    FTS/N2    1,825,131       2       ?      ?      ?
q4     9    999   N1/FTS    6,233,815       2       ?      ?      ?
q3     2    999   FTS/FTS   8,021,324       2       ?      ?      ?
q1     9    33    N1/N2     37,382          1       ?      ?      ?
q2     5    32    N1/N2     2               0       ?      ?      ?
39. Demo 8: When Cursor becomes Bind Aware?
Guesstimate the aware flag and actual plan

Query  :b1  :b2   Optimal   Rows Processed  Bucket  Aware  Child  Actual
q5     2    33    FTS/N2    1,825,131       2       N      0      FTS/N2
q4     9    999   N1/FTS    6,233,815       2       N      0      FTS/N2
q3     2    999   FTS/FTS   8,021,324       2       N      0      FTS/N2
q1     9    33    N1/N2     37,382          1       N      0      FTS/N2
q2     5    32    N1/N2     2               0       N      0      FTS/N2
41. Real-life Problem with ACS
PeopleSoft Payroll application
§ WHERE employee BETWEEN :b1 AND :b2
§ Payroll for one employee
– :b1 = 123456
– :b2 = 123456
§ Payroll for one company
– :b1 = 000001
– :b2 = 999999
§ Doing payroll for a few employees first then entire company
46. Remarks on Bind Sensitivity
Based on experimental observation
§ Monitor V$SQL_CS_STATISTICS.rows_processed
– If small number of rows then
§ V$SQL_CS_HISTOGRAM.bucket_id(0)++
– If medium number of rows then
§ V$SQL_CS_HISTOGRAM.bucket_id(1)++
– If large number of rows then
§ V$SQL_CS_HISTOGRAM.bucket_id(2)++
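The bucket boundaries are not documented; from the demo numbers above (2 rows lands in bucket 0, 37,382 in bucket 1, 1,825,131 in bucket 2), the commonly observed cutoffs of roughly 1,000 and 1,000,000 rows fit:

```sql
-- Observed (undocumented, version-dependent) classification:
--   bucket_id 0 : fewer than ~1,000 rows processed
--   bucket_id 1 : ~1,000 to ~1,000,000 rows
--   bucket_id 2 : more than ~1,000,000 rows
SELECT child_number, bucket_id, count
  FROM v$sql_cs_histogram
 WHERE sql_id = :sql_id
 ORDER BY child_number, bucket_id;
```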
47. Remarks on Bind Aware
Based on experimental observation
§ Some cases where cursor may become bind aware
– bucket_id(0) = bucket_id(1) > 0
– bucket_id(1) = bucket_id(2) > 0
– bucket_id(0) > 0 and bucket_id(2) > 0
§ Or use /*+ BIND_AWARE */ CBO Hint
– What if we cannot modify code?
48. Conclusions
ACS can produce multiple optimal plans for one query
§ ACS only applies to a subset of queries with binds
§ ACS requires a ramp-up process (a few executions)
§ In some cases a cursor may fail to become bind aware
§ To force a cursor to become bind aware, use the BIND_AWARE CBO hint
§ ACS is not persistent
§ ACS works well with SQL Plan Management
49. Give Away
Script sqlt/utl/coe_gen_sql_patch.sql (MOS 215187.1)
§ Creates a SQL Patch for one SQL_ID
§ Turns “on” EVENT 10053 for SQL_ID
§ Hints on SQL Patch
– GATHER_PLAN_STATISTICS
– MONITOR
– BIND_AWARE
§ Consider using and customizing this free script
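A rough sketch of what such a SQL Patch amounts to. Note the assumption: the DBMS_SQLDIAG.CREATE_SQL_PATCH overload taking a SQL_ID is available from 12c onward (the 11g-era script uses internal package calls instead), so verify against your version before relying on it:

```sql
-- Assumed 12c+ API: attach the three hints to one statement
-- without touching application code.
DECLARE
  l_patch VARCHAR2(128);
BEGIN
  l_patch := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => '&sql_id.',  -- placeholder SQL_ID
    hint_text => 'GATHER_PLAN_STATISTICS MONITOR BIND_AWARE',
    name      => 'acs_bind_aware_patch');
END;
/
```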
50. References and Contact Info
Oracle Optimizer Blog
§ https://blogs.oracle.com/optimizer/
– Insight into the workings of the Optimizer
§ carlos.sierra@enkitec.com
§ http://carlos-sierra.net
§ @csierra_usa