Autonomous Transaction Processing (ATP) - the second in the family of Oracle’s Autonomous Databases – offers Oracle DBAs the ability to apply a force multiplier for their OLTP database application workloads. However, it’s important to understand both the benefits and limitations of ATP before migrating any workloads to that environment. I'll offer a quick but deep dive into how best to take advantage of ATP - including how to load data quickly into the underlying database – and some ideas on how ATP will impact the role of Oracle DBA in the immediate future. (Hint: Think automatic transmission instead of stick-shift.)
Vote Early, Vote Often: From Napkin to Canvassing Application in a Single Weekend - Jim Czuprynski
The frenetic pace of application development in modern IT organizations means it’s not unusual to demand an application be built with minimal requirement gathering – literally, from a napkin-based sketch – to a working first draft of the app within extremely short time frames – even a weekend! – with production deployment to follow just a few days later.
I'll demonstrate a real-life application development scenario – the creation of a mobile application that gives election canvassers a tool to identify, classify, and inform voters in a huge suburban Chicago voting district – using the latest Oracle application development UI, data modeling tools, and database technology. Along the way, we’ll show how Oracle APEX makes short work of building a working application while the Oracle DBA leverages her newest tools – SQL Developer and Data Modeler – to build a secure, reliable, scalable application for her development team.
Conquer Big Data with Oracle 18c, In-Memory External Tables and Analytic Functions - Jim Czuprynski
There’s an onslaught of Big Data coming to our IT shops - zettabytes of it! – but instead of your application developers struggling to learn new languages and techniques to analyze it, why not leverage Oracle Database 18c?
I'll demonstrate how to tackle handling the coming Big Data tidal wave with the best tool ever designed to filter, sort, aggregate, and report information: Structured Query Language. We’ll also take a closer look at using some new Analytic Functions in 19c to make short work of complex analyses and how best to leverage 18c’s latest Database In-Memory features for External Tables. And we’ll even explore how easy it is to leverage External Tables in Autonomous Data Warehouse using the latest features of DBMS_CLOUD.
Speaker: Isabel Peters, Software Engineer, MongoDB
Track: WTC Lounge
Data backup is a critical process to keep your data safe and recoverable in case of an unexpected local storage failure. At MongoDB, we develop tools to easily back up your data, keep it safe, and restore it so that you don't have to worry or spend time thinking about the process, allowing you to focus on your various other responsibilities. Come discover what the architecture of a backup system looks like.
Getting Started with MongoDB Using the Microsoft Stack - MongoDB
Speaker: John Randolph, Sr. Software Developer, Gexa Energy
Level: 100 (Beginner)
Track: Developer
Gexa has implemented several applications using MongoDB as a document repository storing multiple types of files (PDF, XLS, CSV, etc.). This entry-level session is intended to share what we've learned in developing and deploying our first applications in an on-premises Microsoft environment. We'll provide architectural and development information about what we've done. The focus is to help get your projects up to speed more quickly. This will be useful to teams moving from pilot to production and for developers getting started with the .Net MongoDB drivers. Plenty of code samples will be shown. We'll discuss our successful engagement with MongoDB Consulting to help us design and deploy a high-quality production environment.
What You Will Learn:
- Ideas on how to store and retrieve documents of different sizes, types, and volumes. We'll describe the storage, partitioning, and indexing techniques used that provide sub-second retrieval from collections with over 100 million records.
- The issues addressed in moving to production, including: backup, disaster recovery, SSL, using replica sets, implementing authorization and authentication, changing default settings, and creating a full path-to-production set of environments.
- A successful pattern for building applications with .Net, providing teams some ideas to jump-start their development along with tips and tricks for using the .Net drivers.
Marquez: A Metadata Service for Data Abstraction, Data Lineage, and Event-bas... - Willy Lulciuc
At WeWork, it's critical that we understand the complete context for all datasets. We also want to be able to explore dependencies between jobs and the datasets they produce and consume. To do this, WeWork needs metadata. In this talk I will focus on Marquez, a core service for the collection, aggregation, and visualization of a data ecosystem's metadata. Marquez maintains the provenance of how datasets are consumed and produced while providing global visibility into job runtimes.
A Step by Step Introduction to the MySQL Document Store - Dave Stokes
Looking for a fast, flexible NoSQL document store that runs with the power and reliability of MySQL? This is an introduction to using the MySQL Document Store.
Graph databases and the Panama Papers - Stefan Armbruster - Codemotion Milan ... - Codemotion
In spring 2016 the first press reports regarding the "Panama Papers" were released. With almost 3TB of raw data, this was by far the largest data leak worldwide. This talk gives some technical insights into how the ICIJ (International Consortium of Investigative Journalists) worked with that amount of data to provide journalists an easy-to-use interface for doing their research. Among other technologies, one core component was a graph database. In a live demo on the Panama Papers dataset, we'll explore the power and conciseness of the graph query language Cypher.
MongoDB .local London 2019: MongoDB Atlas Data Lake Technical Deep Dive - MongoDB
MongoDB Atlas Data Lake is a new service offered by MongoDB Atlas. Many organizations store long term, archival data in cost-effective storage like S3, GCP, and Azure Blobs. However, many of them do not have robust systems or tools to effectively utilize large amounts of data to inform decision making. MongoDB Atlas Data Lake is a service allowing organizations to analyze their long-term data to discover a wealth of information about their business.
This session will take a deep dive into the features that are currently available in MongoDB Atlas Data Lake and how they are implemented. In addition, we'll discuss future plans and opportunities and offer ample Q&A time with the engineers on the project.
An Autonomous Singularity Approaches: Force Multipliers For Overwhelmed DBAs - Jim Czuprynski
Autonomous Database Services have expanded well beyond their original scope of heavy analytical workloads (ADW) and hybrid transaction processing / reporting workloads (ATP) to include dedicated Cloud-based instances to eliminate contention between “noisy neighbors” in the same region and domain.
I'll explain how Oracle DBAs at any skill level can immediately leverage Autonomous resources as force multipliers to free them from most mundane administration tasks so they can concentrate on mastering the new skills required to become an Enterprise Data Architect - the emerging post-DBA role – and shift their focus towards building better enterprise systems in concert with their organization’s application developers, business analysts, and business units.
Presented at JavaOne 2013, Tuesday September 24.
"Data Modeling Patterns" co-created with Ian Robinson.
"Pitfalls and Anti-Patterns" created by Ian Robinson.
Database basics for new-ish developers -- All Things Open October 18th 2021 - Dave Stokes
Do you wonder why it takes your database so long to find the top five of your fifty-six million customers? Do you really have a good idea of what NULL is and how to use it? And why are some database queries so quick and others frustratingly slow? Relational databases have been around for over fifty years and have been frustrating developers for at least forty-nine of those years. This session is an attempt to explain why the database sometimes seems very fast and other times not. You will learn how to organize data into tables by function (normalization) to avoid redundancy, how to join two tables to combine data, and why Structured Query Language is so different from most other languages. And you will see how thinking in sets rather than records can greatly improve your life with a database.
Introduction to SQL Server Internals: How to Think Like the Engine - Brent Ozar
When you pass in a query, how does SQL Server build the results? Time to role play: Brent will be an end user sending in queries, and you will play the part of the SQL Server engine. Using simple spreadsheets as your tables, you will learn how SQL Server builds execution plans, uses indexes, performs joins, and considers statistics.
This session is for DBAs and developers who are comfortable writing queries, but not so comfortable when it comes to explaining nonclustered indexes, lookups, and sargability.
MongoDB Europe 2016 - Using MongoDB to Build a Fast and Scalable Content Repository - MongoDB
MongoDB can be used in the Nuxeo Platform as a replacement for more traditional SQL databases. Nuxeo's content repository, which is the cornerstone of this open source enterprise content management platform, integrates completely with MongoDB for data storage. This presentation will explain the motivation for using MongoDB and will emphasize the different implementation choices driven by the very nature of a NoSQL datastore like MongoDB. Learn how Nuxeo integrated MongoDB into the platform which resulted in increased performance (including actual benchmarks) and better response to some use cases.
Webinar: Choosing the Right Shard Key for High Performance and Scale - MongoDB
Read these webinar slides to learn how selecting the right shard key can future proof your application.
The shard key that you select can impact the performance, capability, and functionality of your database.
Gian will offer his reflections on the Druid journey to date, plus describe his vision for what Druid will become. He will lay out the near-term Druid roadmap and take your questions.
Watch video: https://imply.io/virtual-druid-summit/apache-druid-vision-and-roadmap-gian-merlino
What's Your Super-Power? Mine is Machine Learning with Oracle Autonomous DB - Jim Czuprynski
Artificial Intelligence (AI) and Machine Learning (ML) are a lot like preserving the Earth's environment: Almost everyone is talking about what should be done to save it, but very few people have committed to actually doing something about it. I'll demonstrate a few real-life opportunities to discover unseen patterns and relationships within sample financial and election data by leveraging the AI and ML capabilities already built into Oracle Autonomous Database.
JSON, A Splash of SODA, and a SQL Chaser: Real-World Use Cases for Autonomous... - Jim Czuprynski
JSON is the new XML! It’s everywhere, from NoSQL databases to REST APIs. Let me share with you how Oracle’s Autonomous JSON Database (AJD) makes short work of handling JSON-resident information, especially when paired with robust functions and features of Oracle 19c and 21c.
Apache Druid ingests and enables instant queries on many billions of events in real time. But how? This talk describes each of the components of an Apache Druid cluster, along with the data and query optimisations at its core, that unlock fresh, fast data for all.
Application Performance Troubleshooting 1x1 - Part 2 - Noch mehr Schweine und... - rschuppe
Application performance doesn't come easy. How do you find the root cause of performance issues in modern, complex applications when all you have to start with is a complaining user?
In this presentation (mainly in German, but understandable for English speakers) I reprise the fundamentals of troubleshooting and offer some new examples of how to tackle issues.
Follow up presentation to "Performance Trouble Shooting 101 - Schweine, Schlangen und Papierschnitte"
Top 5 Things to Know About SQL Azure for Developers - Ike Ellis
Databases in the cloud are a brave new world. This presentation will show you the issues involved in migrating your application to SQL Azure and how to address them.
This presentation was prepared for a Webcast where John Yerhot, Engine Yard US Support Lead, and Chris Kelly, Technical Evangelist at New Relic discussed how you can scale and improve the performance of your Ruby web apps. They shared detailed guidance on issues like:
Caching strategies
Slow database queries
Background processing
Profiling Ruby applications
Picking the right Ruby web server
Sharding data
Attendees will learn how to:
Gain visibility on site performance
Improve scalability and uptime
Find and fix key bottlenecks
See the on-demand replay:
http://pages.engineyard.com/6TipsforImprovingRubyApplicationPerformance.html
Best Practices for Building Robust Data Platform with Apache Spark and Delta - Databricks
This talk will focus on the journey of technical challenges, trade-offs, and ground-breaking achievements in building performant and scalable pipelines, drawn from our experience working with customers.
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. What aspects should be considered while designing your logging solution?
Docker Logging and analysing with Elastic Stack - Jakub Hajek PROIDEA
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. This technical presentation shows how to manage a large amount of data in a typical microservices environment.
50k runs; millions of metrics, parameters, and tags; bursts at 20k QPS. That's the volume of data managed by our MLflow tracking servers this year at Criteo. In this talk, you will learn how we set up a shared instance of MLflow at company scale. We will present our contributions to the SQLAlchemyStore to make it responsive at this scale, and how we turned MLflow into a production-ready system: scaling a shared instance horizontally on a Mesos cluster, monitoring based on Prometheus, integration with the company's Single Sign-On (SSO) authentication, and how our data scientists register their runs from the largest Hadoop cluster in Europe.
Modern Oracle DBAs have spent years acquiring extremely valuable skills, even while facing increased responsibility for growing numbers of diverse multi-version databases, demands to transition to public cloud infrastructure, and a never-ending drumbeat for upskilling and relevance in our industry. It's the perfect time to consider a transition in your career by leveraging your expertise with the Oracle database in a new role as a Data Engineer (DE).
Going Native: Leveraging the New JSON Native Datatype in Oracle 21c - Jim Czuprynski
Need to incorporate JSON documents into existing Oracle database applications? The new native JSON datatype introduced in Oracle 21c makes it simple to store, access, traverse, and filter the complex data often found within JSON documents, often without any application code changes.
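(A taste of the 21c syntax, for illustration only; the table and column names here are hypothetical and not from the abstract:)
SQL> CREATE TABLE receipts (id NUMBER, doc JSON);
SQL> INSERT INTO receipts VALUES (1, '{"store" : "Evanston", "total" : 42.17}');
SQL> SELECT r.doc.store FROM receipts r;   -- simple dot-notation access into the document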
Access Denied: Real-World Use Cases for APEX and Real Application Security - Jim Czuprynski
Limiting users’ access to data is still a thorny issue in many Oracle shops: How do we ensure only the right people view - much less change! - only the data they’re allowed to? We’ll show you how we solved those issues for a large government agency with hundreds of external users via Real Application Security (RAS), whether they’re using APEX applications or direct-access tools like SQLcl.
Charge Me Up! Using Oracle ML, Analytics, and APEX For Finding Optimal Charge... - Jim Czuprynski
Think finding a close parking space is a challenge? Finding the closest charging station for your EV when you’re running short on battery power will be the next nightmare for drivers in Smart Cities. I’ll show how to use existing Oracle Machine Learning, Analytics, and APEX to find the closest charge point while driving, as well as determine where it makes the most sense to place charge points to benefit utility customers.
Graphing Grifters: Identify & Display Patterns of Corruption With Oracle Graph - Jim Czuprynski
Uncovering patterns of suspicious behavior is no longer something only an experienced gumshoe or fraud investigator can ferret out. Using Oracle’s powerful Machine Learning algorithms and Property Graph plug-ins, we’ll show how to quickly identify and display potentially suspicious financial transactions.
So an Airline Pilot, a Urologist, and an IT Technologist Walk Into a Bar: Thi... - Jim Czuprynski
It’s no joke: The IT industry is undergoing a maelstrom of change – ever-increasing data volumes, horrendous security incursions, the promise / threat of Cloud-based computing, and a gradual loss of its most talented people through age-based attrition. What’s needed more than ever is a revival of professionalism within our ranks, and it’s time for us to rise up as a community to strive towards that goal. Seriously - if you are just doing your IT job and are perfectly satisfied with your status in our industry, please don’t even bother downloading this presentation. (Just kidding!)
Politics Ain’t Beanbag: Using APEX, ML, and GeoCoding In a Modern Election Campaign - Jim Czuprynski
Oracle announced in December 2019 its Spatial and Graph features are now included without additional licensing costs for Oracle databases. This means application developers now have low-cost access to powerful geolocation, routing, and mapping capabilities – a welcome addition for any Application Express (APEX) application that previously shied away from implementing those features. I'll demonstrate a real-life use case – handling the changing demands of a modern election campaign, including managing widely-dispersed volunteers and voters, using geolocation for merchandise distribution, and identifying “flippable” voters with ML and analytics – through a mobile-capable APEX application.
One Less Thing For DBAs to Worry About: Automatic Indexing - Jim Czuprynski
You’re a busy Oracle DBA. Your phone rings. It’s your most troublesome user, once again complaining that her query is running slow. You take a quick look at the execution plan, find a possible choice for a new index to improve its performance, and drop it in place: Problem solved. Or is it? Even an experienced DBA may not immediately realize the impact that new index will have on the performance of dozens of other queries and DML statements.
Finally, there’s a better way: Let the database decide.
I'll show you how Automatic Indexing (AI) - one of the newest features of Oracle Database 19c - provides an intriguing alternative to reactive performance tuning methodologies for index creation. We’ll look at how AI reacts to a heavy hybrid application workload and then holistically builds, tests, and implements the most appropriate secondary indexes needed to improve database performance.
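(For the curious: Automatic Indexing is driven through the DBMS_AUTO_INDEX package. A minimal sketch of switching it on - not drawn from the abstract itself:)
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');    -- build and use auto indexes
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');  -- build invisible auto indexes only
SQL> SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;                -- summarize recent AI activity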
Keep Your Code Low, Low, Low, Low, Low: Getting to Digitally Driven With Orac... - Jim Czuprynski
In the brave new digitally-driven world, IT organizations can no longer focus on internal-only RDBMS databases as the central pillar of their infrastructure; data must be accessed externally as well, regardless of format or location, with utmost security. Fortunately, Oracle’s Converged Database strategy makes it simple to satisfy these demands. This presentation explores the myriad facets of a Converged Database strategy and what it means for your career’s future path, regardless of whether you’re an application developer or DBA.
Cluster, Classify, Associate, Regress: Satisfy Your Inner Data Scientist with... - Jim Czuprynski
The modern data scientist has a daunting task: Probing petabytes of data, figuring out which Machine Learning (ML) algorithms to apply to filter the grain from the chaff, and producing meaningful intelligence on which to base digitally-driven strategies for their organization. This presentation demonstrates how even a fledgling citizen data scientist facing new real-life opportunities to discover unseen patterns and relationships within sample data can quickly leverage the powerful ML capabilities already built into the Oracle Database and available for use at no additional cost.
Where the %$#^ Is Everybody? Geospatial Solutions For Oracle APEX - Jim Czuprynski
Geospatial use cases are common – closest coffeeshop, efficient delivery routing, other stores near me – and I’ll show you how to use Oracle’s Spatial and Graph feature set to tackle them within a simple-to-build APEX application.
Fast and Furious: Handling Edge Computing Data With Oracle 19c Fast Ingest and Fast Lookup - Jim Czuprynski
The Internet of Things (IoT) has deep use cases - energy grids, communications, policing, security, and manufacturing. I’ll show how to use Oracle 19c’s Fast Ingest and Fast Lookup features to load IoT data from “edge” sources to take immediate advantage of that information in nearly real time.
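(For context, Fast Ingest is exposed through Oracle 19c's Memoptimized Rowstore. A minimal, hypothetical sketch of the syntax involved; the table is illustrative only, not from the abstract:)
SQL> CREATE TABLE iot_readings (
       sensor_id NUMBER, reading_time TIMESTAMP, reading_value NUMBER)
     MEMOPTIMIZE FOR WRITE;                -- enable Fast Ingest on this table
SQL> INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_readings
     VALUES (42, SYSTIMESTAMP, 98.6);      -- buffered, deferred ("fast ingest") insert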
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
2. My Credentials
• 40 years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP and ADWC
• Oracle ACE Director since 2014
• ODTUG Database Committee Lead
• Editor of ODTUG TechCeleration
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, COLLABORATE, KSCOPE, and international and regional OUGs
E-mail me at jczuprynski@zerodefectcomputing.com
Follow me on Twitter (@JimTheWhyGuy)
Connect with me on LinkedIn (Jim Czuprynski)
3. Our Agenda
•Autonomous Transaction Processing (ATP)
•Creating, Controlling, and Monitoring an ATP Instance
•Loading Data Into ATP
•Monitoring ATP Performance in Multiple Dimensions
•Demo: How ATP Reacts to Overwhelming Workloads
•Conclusions and References
4. Moving to Autonomous DB: A Suggested Business Process Flow
• Assess: Is my application workload really ready to move to ATP?
• Plan: What migration strategy is most appropriate? How long of an outage can my production application afford?
• Migrate: Transfer data using the chosen migration strategy, and keep it synchronized.
• Monitor: Watch for any unexpected service outages, performance degradation, or user complaints.
• Tweak: Should any application workloads shift to a different ATP instance service?
As an evolving Oracle Enterprise Data Architect, it’s crucial to recognize and embrace the main thrust of Autonomous DB: No More Knobs!
6. ATP: Creating a New Instance (1)
1. Specify your cloud account …
2. … and get logged in.
3. Access your Cloud Dashboard, then choose what kind of instance to create.
4. Build a new compartment for your ATP instance …
5. … and check out the other compartments available.
7. ATP: Creating a New Instance (2)
1. Specify a compartment and administrator credentials …
2. … and ATP instance creation begins!
3. The ATP instance now shows up in the chosen compartment …
4. … and your first ATP instance is now ready to access.
8. ATP: Creating a New Instance (3)
1. Connect to the new instance using the ADMIN account …
2. Here’s your first look at the ATP Service Console!
3. Request new credentials for access …
4. … supply a robust password …
5. … and save the new credentials in the TNSNAMES home.
10. Examples of Automatically Provided ATP Database Services
• PDBSOE_TPURGENT – Usage: OLTP; Parallelism: manual; Resource Management Plan shares: 12; Concurrency: unlimited. Highest-priority service aimed at time-critical OLTP operations.
• PDBSOE_TP – Usage: OLTP; Parallelism: 1; Resource Management Plan shares: 8; Concurrency: unlimited. Use for typical (non-time-critical) OLTP operations.
• PDBSOE_HIGH – Usage: queries; Parallelism: CPU_COUNT; Resource Management Plan shares: 4; Concurrency: 3 queries. When the system is under resource pressure, these sessions get the highest priority.
• PDBSOE_MEDIUM – Usage: queries; Parallelism: 4; Resource Management Plan shares: 2; Concurrency: 1.25 x CPU_COUNT queries. When the system is under resource pressure, these sessions receive medium priority.
• PDBSOE_LOW – Usage: queries; Parallelism: 1; Resource Management Plan shares: 1; Concurrency: 2 x CPU_COUNT queries. When the system is under resource pressure, these sessions receive the lowest priority.
See the detailed documentation for complete information on how these database services work.
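Selecting one of these services is simply a matter of the connect string the application uses. A minimal sketch (hypothetical instance name PDBSOE; assumes a client wallet already configured via TNS_ADMIN):
$> sqlplus soe/********@pdbsoe_tpurgent   # time-critical OLTP work
$> sqlplus soe/********@pdbsoe_tp         # typical OLTP work
$> sqlplus soe/********@pdbsoe_low        # lowest-priority reporting queries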
11. ATP: Migrating and Loading Data
Am I empowered to …
• Load data with SQL*Loader or SQL Developer? 18c: Yes. ATP: Yes, but source files should reside “nearby” on the network.
• Load data with Data Pump Import? 18c: Yes. ATP: Yes, but the export dump set resides in object storage.
• Export data with Data Pump Export? 18c: Yes. ATP: Yes, but the export dump set resides in object storage.
• Synchronize data with GoldenGate*? 18c: Yes. ATP: Yes, within certain limits.
*See this documentation for complete information on GoldenGate capabilities for Autonomous Databases.
13. ATP: Monitoring Performance in Multiple Dimensions
• Generating a Simple Sample Workload
• Monitoring Performance With the ATP Service Console
• Monitoring Performance with MonitorDB Utility
• Demonstration: Generating a “Nightmare” Workload
14. ATP: Monitoring Instance and Statement Performance
1. How is the ATP instance performing right now, and are there any evident “pushbacks” against a running workload?
2. Performance can also be viewed for a particular, narrower time period.
3. View the performance of running as well as completed individual statements.
4. View an individual SQL statement’s performance …
5. … the statement’s execution plan …
6. … and how much parallelism is being consumed.
15. ATP: Turning the “Big Red Dial”
1. Requesting CPU scale-up …
2. Scale-up in progress …
3. … and successful CPU scale-up completed.
Workload Exhaustion Demonstration: five different workloads simultaneously executed against the SOE schema. After scale-up, TPURGENT performance improves … the number of executing statements increases … and there’s a decrease in queued statements.
17. 18c vs. ATP: Comparison of Features
Am I empowered to …
• Add my own schemas? 18c: Yes. ATP: Yes.
• Connect applications directly via TNSNAMES? 18c: Yes. ATP: Yes.
• Elastically upsize or downsize CPUs, memory, and storage? 18c: Yes. ATP: Yes.
• Create my own CDBs and PDBs? 18c: Yes. ATP: No.
• Clone a PDB to the same or another CDB? 18c: Yes. ATP: No.
• Build my own tablespaces? 18c: Yes. ATP: No.
• Modify memory pool sizes (e.g. SGA_SIZE)? 18c: Yes. ATP: No.
• Modify security settings (e.g. keystores)? 18c: Yes. ATP: No.
• Connect directly as SYS? 18c: Yes. ATP: No.
• Build a PDB using RMAN backups? 18c: Yes. ATP: No.
• Connect with Enterprise Manager Cloud Control for monitoring? 18c: Via proxy agent. ATP: No.
18. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (1)
1. Create credentials for accessing object storage:
SQL> BEGIN
       DBMS_CLOUD.CREATE_CREDENTIAL(
         credential_name => 'extb_tpcds'
        ,username        => 'IOUGCloudTrial@ioug.org'
        ,password        => '(;n<T1#-MpY>4u>_yilK'
       );
     END;
     /
2. Create the new table:
SQL> CREATE TABLE tpcds.customer_credit_ratings (
       ccr_customer_number NUMBER(7)
      ,ccr_last_reported   DATE
      ,ccr_credit_rating   NUMBER(5)
      ,ccr_missed_payments NUMBER(3)
      ,ccr_credit_maximum  NUMBER(7)
     )
     STORAGE (INITIAL 8M NEXT 4M)
     PARTITION BY RANGE (ccr_last_reported)
     INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'))
     (PARTITION ccr_oldest
        VALUES LESS THAN (TO_DATE('1998-04-01', 'yyyy-mm-dd'))
     );
Table created.
3. Load data with DBMS_CLOUD.COPY_DATA:
SQL> BEGIN
       DBMS_CLOUD.COPY_DATA(
         table_name      => 'CUSTOMER_CREDIT_RATINGS'
        ,credential_name => 'EXTB_TPCDS'
        ,file_uri_list   => 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
        ,schema_name     => 'TPCDS'
        ,field_list      => 'ccr_customer_number CHAR(08),ccr_last_reported CHAR(10)
                            ,ccr_credit_rating CHAR(05),ccr_missed_payments CHAR(03)
                            ,ccr_credit_maximum CHAR(07)'
        ,format          => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}');
     EXCEPTION
       WHEN OTHERS THEN
         DBMS_OUTPUT.PUT_LINE('ERROR:' || SQLCODE || ' ' || SQLERRM);
     END;
     /
PL/SQL procedure successfully completed.
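As a quick sanity check once COPY_DATA completes (a follow-up sketch, not from the original deck), the row count and its spread across the interval partitions can be verified directly:
SQL> SELECT COUNT(*) FROM tpcds.customer_credit_ratings;
SQL> SELECT TRUNC(ccr_last_reported, 'Q') AS quarter, COUNT(*) AS ratings
       FROM tpcds.customer_credit_ratings
      GROUP BY TRUNC(ccr_last_reported, 'Q')
      ORDER BY quarter;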
19. ATP: Loading Data Via DBMS_CLOUD.COPY_DATA (2)
1. Monitoring a running load task …
2. … even when it fails to complete successfully!
3. Show the status of load operations:
SET LINESIZE 132
SET PAGESIZE 20000
COL owner_name FORMAT A08 HEADING "Owner"
COL table_name FORMAT A24 HEADING "Table|Loaded"
COL type FORMAT A08 HEADING "Operation"
COL status FORMAT A10 HEADING "Status"
COL start_dtm FORMAT A19 HEADING "Started At"
COL update_dtm FORMAT A19 HEADING "Finished At"
COL logfile_table FORMAT A12 HEADING "LOGFILE|Table"
COL badfile_table FORMAT A12 HEADING "BADFILE|Table"
SELECT
owner_name
,table_name
,type
,status
,TO_CHAR(start_time,'YYYY-MM-DD HH24:MI:SS') start_dtm
,TO_CHAR(update_time,'YYYY-MM-DD HH24:MI:SS') update_dtm
,logfile_table
,badfile_table
FROM user_load_operations
WHERE type = 'COPY'
ORDER BY start_time DESC;
Table LOGFILE BADFILE
Owner Loaded Operatio Status Started At Finished At Table Table
-------- ------------------------ -------- ---------- ------------------- ------------------- ------------ ------------
TPCDS CUSTOMER_CREDIT_RATINGS COPY COMPLETED 2018-10-08 11:00:59 2018-10-08 11:03:12 COPY$38_LOG COPY$38_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:51:09 2018-10-08 10:53:16 COPY$37_LOG COPY$37_BAD
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:49 2018-10-08 10:50:49
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:50:03 2018-10-08 10:50:03
TPCDS CUSTOMER_CREDIT_RATINGS COPY FAILED 2018-10-08 10:34:33 2018-10-08 10:35:56 COPY$34_LOG COPY$34_BAD
4. Show the resulting LOG file:
SQL> SELECT *
FROM copy$38_log;
LOG file opened at 10/08/18 16:03:01
Total Number of Files=1
Data File: https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudt
Log File: COPY$38_144722.log
LOG file opened at 10/08/18 16:03:01
Bad File: COPY$38_355882.bad
Field Definitions for table COPY$WQPDD1Q3X2892USR6RY7
Record format DELIMITED BY
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
CCR_CUSTOMER_NUMBER CHAR (8)
Terminated by "|"
CCR_LAST_REPORTED CHAR (10)
Date datatype DATE, date mask YYYY-MM-DD
Terminated by "|"
CCR_CREDIT_RATING CHAR (5)
Terminated by "|"
CCR_MISSED_PAYMENTS CHAR (3)
Terminated by "|"
CCR_CREDIT_MAXIMUM CHAR (7)
Terminated by "|"
Date Cache Statistics for table COPY$WQPDD1Q3X2892USR6RY7
Date conversion cache disabled due to overflow (default size: 1000)
20. ATP: Migrating Data Via DataPump Export and Import
1. Export data from the source database:
$> expdp vevo/vevo@pdbvevo parfile=ADW_VEVO.expdp
#####
# File: ADW_VEVO.expdp
# Purpose: DataPump Export parameter file for VEVO schema
# 1.) Exclude all:
# - Clusters
# - Database Links
# - Indexes and Index Types
# - Materialized Views, Logs, and Zone Maps
# 2.) For partitioned tables, unload all table data in a single operation (rather
# than unload each table partition as a separate operation) for faster loading
# 3.) Use 4 degrees of parallelism and write to multiple dump files
#####
DIRECTORY=DATA_PUMP_DIR
EXCLUDE=INDEX, CLUSTER, INDEXTYPE, MATERIALIZED_VIEW, MATERIALIZED_VIEW_LOG,
MATERIALIZED_ZONEMAP, DB_LINK
DATA_OPTIONS=GROUP_PARTITION_TABLE_DATA
PARALLEL=4
SCHEMAS=vevo
DUMPFILE=export%u.dmp
Export: Release 18.0.0.0.0 - Production on Sat Sep 1 19:12:37 2018
Version 18.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Starting "VEVO"."SYS_EXPORT_SCHEMA_01": vevo/********@pdbvevo parfile=ADW_VEVO.expdp
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . .
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . exported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . exported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . exported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . exported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "VEVO"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for VEVO.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export01.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export02.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export03.dmp
/u01/app/oracle/admin/ORCL/dpdump/6F4634904CBF29F3E0535AEA110A9CAE/export04.dmp
Job "VEVO"."SYS_EXPORT_SCHEMA_01" successfully completed at Sat Sep 1 19:13:17 2018 elapsed 0 00:00:39
2. Transfer the export dump set to an object storage container.
3. Set up credentials for access:
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
SQL> BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DPI_VEVO'
,username => 'jczuprynski@zerodefectcomputing.com'
,password => 'N0M0reKn0bs#'
);
END;
/
SQL> ALTER DATABASE PROPERTY
SET default_credential = 'ADMIN.DPI_VEVO';
4. Add the new schema into the ATP instance:
SQL> CONNECT admin/"N0M0reKn0bs#"@tpcds_high
CREATE USER vevo
IDENTIFIED BY N0M0reKn0bs#
TEMPORARY TABLESPACE temp
PROFILE DEFAULT;
GRANT RESOURCE TO vevo;
GRANT CREATE PROCEDURE TO vevo;
GRANT CREATE PUBLIC SYNONYM TO vevo;
GRANT CREATE SEQUENCE TO vevo;
GRANT CREATE SESSION TO vevo;
GRANT CREATE SYNONYM TO vevo;
GRANT CREATE TABLE TO vevo;
GRANT CREATE VIEW TO vevo;
GRANT DROP PUBLIC SYNONYM TO vevo;
GRANT EXECUTE ANY PROCEDURE TO vevo;
GRANT READ,WRITE ON DIRECTORY data_pump_dir TO vevo;
5. Import data into the ATP instance:
$> ./impdp admin/IOUG1sAwesome@TPCDS_HIGH
DIRECTORY=DATA_PUMP_DIR
VERSION=18.0.0
REMAP_SCHEMA=vevo:vevo
DUMPFILE=default_credential:https://swiftobjectstorage.us-ashburn-
1.oraclecloud.com/v1/iougcloudtrial/DP_VEVO/export%U.dmp
PARALLEL=4
PARTITION_OPTIONS=MERGE
TRANSFORM=SEGMENT_ATTRIBUTES:N
TRANSFORM=DWCS_CVT_IOTS:Y
TRANSFORM=CONSTRAINT_USE_DEFAULT_INDEX:Y
EXCLUDE=index,cluster,indextype,materialized_view,materialized_view_log
,materialized_zonemap,db_link
Import: Release 12.2.0.1.0 - Production on Sun Sep 2 21:51:32 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "ADMIN"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_FULL_02": admin/********@TPCDS_HIGH DIRECTORY=DATA_PUMP_DIR VERSION=18.0.0
. . .
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "VEVO"."T_STAFF" 13.86 KB 26 rows
. . imported "VEVO"."T_CANVASSING" 27.77 MB 539821 rows
. . imported "VEVO"."T_CAMPAIGN_ORG" 8.031 KB 25 rows
. . imported "VEVO"."T_VOTING_RESULTS" 80.90 MB 1887761 rows
. . imported "VEVO"."T_VOTERS" 84.84 MB 180000 rows
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "ADMIN"."SYS_IMPORT_FULL_02" successfully completed at Mon Sep 3 02:52:42 2018 elapsed 0 00:01:06
21. ATP: Advantages of “No More Knobs“
Remember, ATP (like ADW) is all about no more knobs … and that’s really advantageous!
• Service instance can be stopped and restarted as necessary
• Useful for conserving Cloud credits
• Easy to connect to
• Only a few entries in the SQLNET.ORA and TNSNAMES.ORA files are required (see the sketch at the end of this slide)
• RMAN backups are taken automatically on a regular nightly schedule
• No instance tuning required
• Memory pool sizes are already locked in
• Parallelism is automatically derived depending on number of OCPUs and service name selected for connection
• Only appropriate licensing options are included
• No worries about accidentally incurring potential additional licensing fees
• Direct-path loads are fully supported
• DataPump Export and Import provides for rapid provisioning from existing databases
• GoldenGate support has been added as well
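For reference, here is a minimal sketch of that client-side configuration. These entries are hypothetical; the real ones ship in the wallet zip downloaded from the Service Console, with a tenancy-specific service name and server certificate DN:
# sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "?/network/admin")))
SSL_SERVER_DN_MATCH = yes
# tnsnames.ora -- one entry per database service, e.g. the _tp service
pdbsoe_tp = (description =
  (address = (protocol = tcps)(port = 1522)(host = adb.us-ashburn-1.oraclecloud.com))
  (connect_data = (service_name = xxxxxxxx_pdbsoe_tp.atp.oraclecloud.com))
  (security = (ssl_server_cert_dn = "CN=...")))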
22. ATP: Summary of Appropriate Use Cases
ATP is most appropriate for the following application workload requirements and environments:
• Mixed workloads, including OLTP and moderate reporting
• Exadata storage software caches most frequently used database blocks in flash memory on storage cells
• Up to 128 OCPUs and 128 TB of storage can be requested per ATP instance (subject to availability within instance’s region)
• Ideally, OLTP application workload(s) should already be well-tuned to avoid surprises
• Virtually no DBA resources required for database management
• No instance tuning is necessary
• Selection of appropriate database service for the workload is really the only choice required
• Parallelism derived from database service selected and number of OCPUs available
• Scale-up and scale-down requires just a single button push
• Database migration and transformation are limited only by the desired / appropriate transfer methods
• Fresh load: DBMS_CLOUD.COPY_DATA, SQL*Loader, or INSERT INTO … SELECT FROM an EXTERNAL table (see the sketch after this list)
• Existing schema(s): DataPump Import
• Tight synchronization required: GoldenGate
• Extremely large data transfers possible via Oracle Cloud Infrastructure Data Transfer Appliance
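To give a concrete flavor of the fresh-load options above, here is a minimal, hypothetical sketch of the external-table path: expose the object-store file from slide 18 via DBMS_CLOUD.CREATE_EXTERNAL_TABLE, then load it with a direct-path INSERT … SELECT (column list abbreviated to the same five columns):
SQL> BEGIN
       DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
         table_name      => 'EXT_CUSTOMER_CREDIT_RATINGS'
        ,credential_name => 'EXTB_TPCDS'
        ,file_uri_list   => 'https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1/iougcloudtrial/ADWExternalTables/CreditScoring_Current.dat'
        ,format          => '{"delimiter" : "|" , "dateformat" : "YYYY-MM-DD"}'
        ,column_list     => 'ccr_customer_number NUMBER(7), ccr_last_reported DATE,
                             ccr_credit_rating NUMBER(5), ccr_missed_payments NUMBER(3),
                             ccr_credit_maximum NUMBER(7)'
       );
     END;
     /
SQL> INSERT /*+ APPEND */ INTO tpcds.customer_credit_ratings
     SELECT * FROM ext_customer_credit_ratings;
SQL> COMMIT;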
24. ATP: Database Feature Limitations
Several database features normally available for an OCI-resident Oracle database are restricted for ATP instances:
• Tablespaces: cannot be added, removed, or modified.
• Parallelism: enabled by default, based on the number of OCPUs and the database service chosen for the application connection.
• Compression: HCC compression is not enabled by default, but a compression clause will be honored.
• Result caching: enabled by default for all statements; cannot be changed.
• Node file system and OS: no direct access permitted.
• Database links to other databases: prohibited to preserve security features.
• PL/SQL calls using DB links: likewise, prohibited.
• Parallel DML: enabled by default, but can be disabled at the session level via ALTER SESSION DISABLE PARALLEL DML;
See Restrictions for Database Features for a complete list of these ATP limitations.
25. ATP: Permitted Changes to Initialization Parameters
Only the following database initialization parameters may be modified: APPROX_FOR_AGGREGATION, APPROX_FOR_COUNT_DISTINCT, APPROX_FOR_PERCENTILE, AWR_PDB_AUTOFLUSH_ENABLED, OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES*, OPTIMIZER_IGNORE_HINTS, OPTIMIZER_IGNORE_PARALLEL_HINTS, PLSCOPE_SETTINGS, PLSQL_CCFLAGS, PLSQL_DEBUG, PLSQL_OPTIMIZE_LEVEL, PLSQL_WARNINGS, TIME_ZONE*, and most NLS parameters.
* Only via ALTER SESSION
See Restrictions to Database Initialization Parameters for more information on permissible changes.
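A few examples of the session-level changes that remain permitted (values are illustrative only):
SQL> ALTER SESSION SET optimizer_ignore_hints = TRUE;
SQL> ALTER SESSION SET nls_date_format = 'YYYY-MM-DD HH24:MI:SS';
SQL> ALTER SESSION SET time_zone = 'America/Chicago';
By contrast, the usual instance-level knobs (SGA_TARGET, PGA_AGGREGATE_TARGET, and friends) are rejected: ATP owns them.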
26. ATP: Unavailable Options and Packs
The following database options and packs are not enabled for ATP instances: Oracle Application Express, Oracle Tuning Pack, Oracle Real Application Testing, Oracle Database Vault, Oracle Data Masking and Subsetting Pack, Oracle Database Lifecycle Management Pack, Oracle Cloud Management Pack for Oracle Database, Oracle OLAP, Oracle Workspace Manager, Oracle Spatial and Graph, the Oracle R capabilities of Oracle Advanced Analytics, Oracle Industry Data Models, Oracle Text, Oracle Multimedia, Java in the DB, Oracle XML DB, and Context.
See Restrictions for Database Features for complete information on unusable database options and packs.
27. ATP: Unavailable SQL Commands
The following SQL commands cannot be executed against an ATP instance:
• ADMINISTER KEY MANAGEMENT: PDB-level security is tightly enforced.
• CREATE / ALTER / DROP TABLESPACE: tablespaces are strictly controlled.
• ALTER PROFILE: resource limits and security restraints are tightly enforced.
• CREATE DATABASE LINK: self-containment and security.
• CREATE INDEX [BITMAP]: BITMAP indexes are not permitted.
See Restrictions for SQL Commands for complete information on these unavailable SQL commands.
28. Useful Resources and Documentation
• ATP Documentation:
https://docs.oracle.com/en/cloud/paas/atp-cloud/index.html
• Dominic Giles’s Blog on Setting Up SwingBench for ATP:
http://www.dominicgiles.com/blog/files/c84a63640d52961fc28f750570888cdc-169.html
• Oracle Autonomous and Secure Cloud Services Blog:
https://blogs.oracle.com/autonomous-and-secure-cloud-services
• Maria Colgan on What to Expect From ATP Cloud:
https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing
https://blogs.oracle.com/what-to-expect-from-oracle-autonomous-transaction-processing-cloud
Editor's Notes
The basic characteristics of these consumer groups are:
TPURGENT
The highest priority application connection service for time critical transaction processing operations.
This connection service supports manual parallelism.
TP
This is the typical application connection service for transaction processing operations.
This connection service does not run with parallelism.
HIGH
Sessions connected to the High database service get the highest priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables.
MEDIUM
Sessions connected to the Medium database service get medium priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables. Using the MEDIUM service the degree of parallelism is limited to four (4).
LOW
Sessions connected to the Low database service get the lowest priority when the system is under resource pressure.
Queries run serially unless you specify a parallel degree through a session parameter, using a statement hint, or by specifying a parallel degree on the underlying tables.
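For the HIGH, MEDIUM, and LOW services, the three ways of specifying a parallel degree mentioned above look like this in practice (a hypothetical sketch; the table and the degree of 4 are illustrative only):
SQL> ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;            -- session-level setting
SQL> SELECT /*+ PARALLEL(4) */ COUNT(*) FROM t_canvassing;     -- statement hint
SQL> ALTER TABLE t_canvassing PARALLEL 4;                      -- degree on the underlying table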