This document summarizes an Exadata consolidation success story. It describes how three half-rack Exadata clusters were used to consolidate 36 development/test databases, 11 production databases, and a further 13 development/test databases plus 6 standby databases. The consolidation achieved cost savings through reduced Oracle licensing, lower third-party software costs, more efficient resource utilization, and reduced facility and administration expenses. It also describes the tools and methodology used to plan and implement the consolidation: gathering utilization metrics, creating a provisioning plan, implementing the plan, and auditing the results.
A duplicate (clone or snapshot) database is useful for a variety of purposes, most of which involve testing and upgrades. You can perform the following tasks in a duplicate database:
• Test backup and recovery procedures
• Test an upgrade to a new release of Oracle Database
• Test the effect of applications on database performance
• Create a standby database (Data Guard) with DG Broker
• Leverage a transient logical standby to perform an upgrade
• Generate reports
6. The Stats
Three half-rack Exadata clusters with high-capacity drives:
Cluster #1: 36 dev/test databases
Cluster #2: 11 production databases
Cluster #3: 13 dev/test databases and 6 standby databases
Still more databases to come…
www.enkitec.com 6
7. Why Consolidate?
The primary drivers for consolidation center on cost savings:
• Reduced Oracle software licensing
• Reduced licensing for third-party products such as backup agents, ETL tools, etc.
• More efficient use of system resources
• Soft costs:
– Floor space
– Power and cooling
– Administration and staffing costs (training, etc.)
9. A Simple Consolidation Example
Let's say we have the following databases to migrate onto Exadata, viewed as cluster-level utilization.
For example, the first row should read: database 'A' requires 4 CPUs and will run on nodes 1 and 2 (2 CPUs each).
10. A Simple Consolidation Example
The same databases, viewed as per-compute-node utilization.
As before, the first row should read: database 'A' requires 4 CPUs and will run on nodes 1 and 2 (2 CPUs each).
11. A Simple Consolidation Example
Per-compute-node utilization: 25%, 42%, 33%, 17%. Cluster-level utilization = 29.2%.
12. A Simple Consolidation Example
Per-compute-node utilization: 8%, 83%, 17%, 8%. Cluster-level utilization = 29.2%.
Note that the cluster-level figure is identical to the previous layout even though the per-node load is far less balanced.
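The arithmetic behind these two layouts can be sketched in a few lines of Python. This is a sketch under assumptions: 12 cores per compute node (typical of that Exadata generation) and per-node core requirements of 3/5/4/2 and 1/10/2/1, which are inferred from the slide's percentages rather than stated in the deck.

```python
# Utilization = Requirements / Capacity, at node and cluster level.
def utilization(required, capacity):
    return required / capacity

cores_per_node = 12          # assumption: 12-core compute nodes
node_count = 4               # half rack = 4 compute nodes

layouts = {
    "balanced": [3, 5, 4, 2],   # -> 25%, 42%, 33%, 17% per node
    "skewed":   [1, 10, 2, 1],  # -> 8%, 83%, 17%, 8% per node
}
for name, per_node in layouts.items():
    node_pcts = [round(100 * utilization(r, cores_per_node)) for r in per_node]
    cluster = 100 * utilization(sum(per_node), cores_per_node * node_count)
    print(f"{name}: nodes {node_pcts}, cluster {cluster:.1f}%")
```

Both layouts place the same 14 required cores, so the cluster-level figure is 29.2% either way; only the per-node view reveals the imbalance.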
13. Tools And Methodology
• Gather Utilization Metrics (usage history)
• Create Provisioning Plan
• Implement Plan
• Audit Your Implementation
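The "create provisioning plan" step is essentially bin-packing CPU requirements onto compute nodes. A hypothetical greedy sketch (the function and the two-preferred-nodes policy are illustrative, not from the deck):

```python
def place(databases, node_count):
    """Greedy placement: each database runs two preferred instances,
    its CPU requirement split evenly across the two least-loaded nodes."""
    load = [0.0] * node_count
    plan = {}
    # place the biggest CPU consumers first
    for name, cores in sorted(databases.items(), key=lambda kv: -kv[1]):
        preferred = sorted(range(node_count), key=lambda n: load[n])[:2]
        for n in preferred:
            load[n] += cores / 2
        plan[name] = sorted(preferred)
    return plan, load

# Database 'A' needs 4 CPUs -> 2 CPUs each on the first two nodes
plan, load = place({"A": 4, "B": 2}, node_count=4)
```

After placing 'A' on nodes 0 and 1 (2 CPUs each), the lighter database 'B' lands on the still-empty nodes 2 and 3.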
15. Provisioning Worksheet
• Capacity planning: Utilization = Requirements / Capacity
• Communication tool
**Supplement to the existing Exadata installation tools:
• Site planning checklist
• Configuration worksheet
• Exadata configurator sheet
• CheckIP
• OneCommand
• Hand-off
16. Capacity
CPU: 2 compute nodes = quarter rack, 4 = half rack, 8 = full rack (CPU_COUNT, threads, and cores: http://goo.gl/CunHN and http://goo.gl/I3fjn; SPECint_rate2006: http://goo.gl/doBI5)
Memory: 96 to 144 GB per node (with the larger configuration, the frequency of the memory DIMMs drops from 1333 MHz to 800 MHz)
Space will also depend on:
• ASM redundancy
• DATA/RECO allocation
• Compression: Query Low (4x), Query High (6x), Archive Low (7x), Archive High (12x)
17. CPU Core Comparison
Source: Sun Fire X4170 M2 (X5670 @ 2.93 GHz). Destination: Exadata.
chip efficiency factor = source SPEC rating / Exadata SPEC rating = 16/26 = .6154
(a multiplier for the amount of CPU-equivalent resources the source cores represent)
utilization = the fraction of the source CPU cores that are being used
offload factor = how much of the database machine's work will be offloaded to the storage cells
EXA cores requirement = source host cores * utilization * chip efficiency factor * offload factor
= 32 * .7 * .6154 * .5
= 6.89 (13.78 before applying the offload factor)
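The core-comparison formula is easy to capture as a reusable function; a minimal sketch, with the function name ours rather than the deck's:

```python
def exa_cores_required(source_cores, utilization, source_spec, exa_spec, offload_factor):
    """Estimate Exadata compute cores needed to absorb a source host's load."""
    # chip efficiency factor = source SPEC rating / Exadata SPEC rating
    chip_efficiency = source_spec / exa_spec
    return source_cores * utilization * chip_efficiency * offload_factor

# Worked example from the slide: 32 cores at 70% busy, a 16/26 SPEC ratio,
# and half the work expected to offload to the storage cells.
needed = exa_cores_required(32, 0.70, 16, 26, 0.5)
print(f"{needed:.2f}")  # 6.89
```

Setting offload_factor to 1.0 reproduces the 13.78-core figure you would need without any Smart Scan offload.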
18. The Perfect Storm
(PeopleSoft HR)
Month-end Processing
+ Weekly Time Entry
+ SQL Plan Change
------------------------------------
Uh-oh!
19. CPU Allocation
Per-node summary:
node 1: 4 instances, 47% CPU used, 49% memory used
node 2: 5 instances, 75% CPU used, 66% memory used
node 3: 4 instances, 47% CPU used, 71% memory used
node 4: 3 instances, 18% CPU used, 54% memory used
Database placement (P = preferred, F = failover):
BIPRDDAL (biprd): P on two nodes
DBFSPRD (DBFSPRD): P on all four nodes
HCMPRDDAL (hcmprd): P on two nodes
MTAPRD11DAL (mtaprd11): P on two nodes
PAPRDDAL (paprd): P on two nodes
RMPRDDAL (rmprd): P on two nodes
dbm (dbm): F on all four nodes
FSPRDDAL (fsprd): P on two nodes
20. Load Map
(our first stop…)
User complaint: HR time entry and OBIEE reports are painfully slow…
22. Instance Activity – HCMPRD2 (node 2)
Problem: a single SQL statement was overwhelming HCMPRD CPU resources.
Fix: the instance was caged at 12 CPUs, and a SQL profile was installed to lock in the good plan.
36. Smart Scan in Action
The cells are scanning 1 TB but returning only 144 GB, and that applies to each of the highlighted row sources below.
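The interconnect saving implied by those numbers is easy to quantify. A small sketch, assuming 1 TB = 1024 GB; the "offload efficiency" name is ours, not the deck's:

```python
def offload_efficiency(gb_scanned, gb_returned):
    """Fraction of the scanned data the cells filtered out before
    shipping results back over the interconnect."""
    return 1 - gb_returned / gb_scanned

print(f"{offload_efficiency(1024, 144):.0%}")  # 86%
```

In other words, roughly 86% of the scanned bytes never leave the storage cells.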
37. The databases on other nodes see the contention as “System I/O”
Without I/O resource management, even critical processes are affected (CKPT, LGWR, …)
38. Inter-database IORM Plan (only kicks in when needed)
I/O requests from critical processes such as CKPT, LGWR, and LMON automatically get priority. Without IORM, I/O requests from these important processes receive the same priority as any other process.
*Side benefit (automatic when IORM is enabled)