This document discusses JSON storage and querying in Oracle databases. It covers:
- Configuring the database to support JSON with required patches
- Storing JSON in BLOB columns for space savings and avoiding character set conversions
- Indexing approaches for JSON including indexes on specific fields
- Performance of ingesting, retrieving, and searching JSON data, which can be improved through techniques like anchoring queries and using in-memory indexes
- Maintenance best practices like checking JSON constraints and rebuilding indexes periodically
Oracle JSON treatment evolution - from 12.1 to 18, AOUG 2018, Alexander Tokarev
The presentation was prepared for the Austrian Oracle User Group's 30th anniversary. It covers the challenges Oracle developers face when implementing high-load JSON processing pipelines.
3. JSON in RDBMS
Why
• Consistency (integrity, transactional ACID guarantees) for stored JSON documents
• Denormalization of complex objects
What
• Logs of requests/responses sent and received during software interaction
• Configuration data, key/value user preferences
• Unstructured or semi-structured complex objects
…and please forget about doing any analytics over JSON fields
4. DB configuration
• Install JSON fixes on a regular basis
• Retain scripts that check for old issues – new fixes often reintroduce them
• These patches MUST be installed:
1. Patch 20080249: JSON Patch Bundle 1
2. Patch 20885778: JSON Patch Bundle 2
3. Patch 24836374: JSON Patch Bundle 3
Oracle considers JSON stable now, so there are no more dedicated JSON bundle patches!
Fixes are delivered inside the ordinary Database Proactive Bundle Patches (Doc ID 1937782.1).
7. Table structure
Benefits of BLOB storage for JSON:
• Roughly half the space consumption
• Less I/O thanks to the smaller footprint
• No implicit character-set conversion when the database character set is not AL32UTF8
RFC 4627
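A minimal sketch of such a table, using a hypothetical INVOICES table (all names are illustrative, not from the deck):

```sql
-- JSON kept in a BLOB: no character-set conversion, smaller footprint.
-- IS JSON FORMAT JSON is the 12c syntax for constraining BLOB content.
CREATE TABLE invoices (
  id           NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  invoice_json BLOB,
  CONSTRAINT invoice_is_json CHECK (invoice_json IS JSON FORMAT JSON)
)
LOB (invoice_json) STORE AS SECUREFILE (CACHE);
```

The CACHE attribute on the LOB matters for insert performance, which the maintenance checks later in the deck verify via dba_lobs.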
21. Ingestion nuances
ALTER SYSTEM SET EVENTS 'TRACE [DIRPATH_LOAD] DISK=high';
ALTER SESSION SET EVENTS '10357 trace name context forever, level 1'; -- trace output for direct-path data loads
22. Retrieval
• Extract one row with raw JSON data and pass it to the application server as is
• Issue a SQL statement that extracts one row from the table and parses it via the Oracle JSON features
• Create a view that encapsulates the JSON treatment and extract a row from the view
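The third option might look like this sketch, assuming the hypothetical INVOICES table and field names, and Oracle 12.2+ where SQL/JSON functions accept BLOB input via FORMAT JSON:

```sql
-- The view hides all JSON parsing from callers.
CREATE OR REPLACE VIEW invoice_v AS
SELECT id,
       json_value(invoice_json FORMAT JSON, '$.customer.name') AS customer_name,
       json_value(invoice_json FORMAT JSON, '$.total' RETURNING NUMBER) AS total
FROM   invoices;
```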
24. Retrieval
Bad approach – each json_value call parses the JSON again!
The same applies to dot notation!
In Oracle 12.2 this works fine thanks to query-rewrite magic!
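The multi-parse problem and its 12.1 workaround might look like this (a schematic example with our own names):

```sql
-- Bad in 12.1: every json_value call re-parses the same document
SELECT json_value(invoice_json FORMAT JSON, '$.customer.name') AS cust,
       json_value(invoice_json FORMAT JSON, '$.total')         AS total
FROM   invoices;

-- Better: a single json_table call parses the document once
SELECT jt.cust, jt.total
FROM   invoices i,
       json_table(i.invoice_json FORMAT JSON, '$'
         COLUMNS (cust  VARCHAR2(200) PATH '$.customer.name',
                  total NUMBER        PATH '$.total')) jt;
```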
25. Retrieval
Storage logic can be encapsulated inside virtual columns.
Either switch adaptive optimization off,
or
apply Patch 24563796: WRONG RESULT IN COLUMN VALUE IN 12C!
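A minimal sketch of the virtual-column approach, assuming the hypothetical invoices table used earlier:

```sql
-- Hypothetical: hide the JSON path behind a virtual column
ALTER TABLE invoices ADD (
  customer_name VARCHAR2(200)
    GENERATED ALWAYS AS (
      json_value(invoice_json FORMAT JSON, '$.customer.name')) VIRTUAL
);

-- Callers now query a plain-looking column:
-- SELECT customer_name FROM invoices WHERE id = :id;
```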
31. Retrieval issues
1. Views often become non-mergeable, so performance degrades
2. ORA-600 and ORA-7445 "no data to be read from socket" errors arise in arbitrary
places
3. count(distinct <field>) fails with ORA-7445 "no data to be read from
socket".
This can be worked around by removing the GROUP BY, adding row_number() over (partition by <group by fields>) rn, and
filtering out records where rn <> 1
4. If two or more json_table calls are used, no exception occurs, but aggregate-function
results can be wrong.
33. Retrieval issues mitigation
1. Try to rewrite the query to use a single json_table call
2. If that doesn't help or is impossible, try the /*+ NO_MERGE */ hint
3. If /*+ NO_MERGE */ doesn't help, use /*+ NO_QUERY_TRANSFORMATION */
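Steps 2 and 3 above could be sketched as follows (query shape and names are illustrative, not from the slides):

```sql
-- Step 2: stop the optimizer from merging the json_table inline view
SELECT /*+ NO_MERGE(v) */ v.cust, COUNT(*)
FROM  (SELECT jt.cust
       FROM   invoices i,
              json_table(i.invoice_json FORMAT JSON, '$'
                COLUMNS (cust VARCHAR2(200) PATH '$.customer.name')) jt) v
GROUP BY v.cust;

-- Step 3: the heavier hammer, disabling query transformations entirely
-- SELECT /*+ NO_QUERY_TRANSFORMATION */ ... ;
```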
70. Extra fast search
Put the FTS index IN-MEMORY
1. Allocate a KEEP pool memory area
2. Pin in the KEEP pool:
− DR$ tables
− DR$ indexes
− LOB segments of the DR$ tables
3. Load once via a stored procedure, or do nothing – IN-MEMORY will populate by itself
Until the server reboots
Execution and search time: ~1 second!
P.S. alter index <index name> rebuild wipes all KEEP pool settings. Use CTX_DDL.SYNC_INDEX only!
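The pinning steps above might look like this; note the DR$ object names are illustrative only — the real ones are derived from your index name:

```sql
-- Illustrative only: actual DR$ names depend on the CONTEXT index name
ALTER TABLE dr$invoice_ftx$i STORAGE (BUFFER_POOL KEEP);
ALTER INDEX dr$invoice_ftx$x STORAGE (BUFFER_POOL KEEP);
ALTER TABLE dr$invoice_ftx$i
  MODIFY LOB (token_info) (STORAGE (BUFFER_POOL KEEP));

-- Refresh the index without wiping these settings
-- (ALTER INDEX ... REBUILD would reset them):
EXEC ctx_ddl.sync_index('INVOICE_FTX');
```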
71. Maintenance
Suffix/prefix columns holding JSON data with _JSON, as in INVOICE_JSON earlier.
• Create daily checks:
1. If you need to control the JSON format (strict/lax), use the dba_tab_columns and
all_json_columns views to check the JSON constraints
2. If you need insert performance, check the cache attribute in dba_lobs
• Check that CONTEXT indexes are in a proper state, especially after a truncate or a huge
delete. If they're in a bad state, rebuild them and reapply the KEEP pool settings.
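The two daily checks could be sketched as dictionary queries like these (exact column lists may vary slightly by version):

```sql
-- Check 1: which JSON columns exist and whether they are strict or lax
SELECT table_name, column_name, format
FROM   all_json_columns;

-- Check 2: JSON BLOBs left with NOCACHE hurt insert performance
SELECT table_name, column_name, cache
FROM   dba_lobs
WHERE  cache = 'NO';
```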
72. Maintenance
Perform regular index optimization
1. Collect fragmented indexes (estimated row fragmentation) – fragmentation can cause
up to a 10x performance degradation
2. Collect indexes with many deleted rows (estimated garbage size)
3. Run ctx_ddl.optimize_index in FULL mode (SERIAL or PARALLEL)
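Step 3 might look like this (index name is illustrative; the parallel degree depends on your workload):

```sql
-- FULL optimization, serial:
EXEC ctx_ddl.optimize_index('INVOICE_FTX', ctx_ddl.optlevel_full);

-- The same in parallel:
EXEC ctx_ddl.optimize_index('INVOICE_FTX', ctx_ddl.optlevel_full, parallel_degree => 4);
```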
73. Conclusion
• JSON is always a tradeoff between performance, convenience of data handling, and integrity
• The indexing strategy should be checked very carefully, especially if you use both notations
• JSON handling is mostly acceptable in row-by-row scenarios
• JSON features are still unstable
• Oracle very often fails with JSON larger than 2 MB
• The current implementation doesn't look like a document-store DB
• Tailored search solutions bring better performance
…and we are waiting for Oracle 12.2
Good afternoon. This is the version of the presentation I gave at PgDay for connoisseurs; as always, it has even more of the amusing subtleties that no production system escapes. One of our projects uses JSON extensively, so we have a lot to share with you. We found quite a few JSON caveats that you may or may not run into in Oracle 12.1. We will cover most of the issues you could face, from the decision to use JSON in Oracle through daily maintenance, and will try to advise how to process JSON efficiently. Please note that some of the advice will be obsolete in Oracle 12.2, especially around search.
We will try to walk through the full JSON field lifecycle. Please note that some of the principles mentioned are not JSON-specific, but without them good DB performance is impossible. I separated retrieval and search, which look similar; the difference will become clear later. Sometimes we will return to an earlier point because of additional workarounds.
Let's discuss why we need JSON in an RDBMS.
These objects are things like tags, questionnaires, and documents with many levels of nesting, which would otherwise require a huge number of relational tables. Key idea – avoid extra joins.
Please don't forget about Support – check for the latest JSON fixes.
Without the patches installed there are tons of errors: ORA-600, "no data to be read from socket", two json_tables failing in different sessions, FORMAT JSON having to be specified for BLOB fields.
I recommend that patch – JSON definitely works better with it.
Let's decide how we would like to store our JSON. Which table is better?
The DB encoding is the Oracle-recommended AL32UTF8.
Let's add 10,000 records to each table and check the space consumption.
Universal Coded Character Set – UCS.
So the final structure recommended by Oracle is:
We created a table with a BLOB, but then the Java side failed with a number conversion error.
Let's add an is json constraint to ensure no invalid data gets into our table.
Let's check it in any online JSON validator.
Why does Oracle accept invalid JSON? Is it actually invalid?
50/50
Oracle's default is lax. If you need to be completely safe for any JSON parser, use STRICT.
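Tightening the check from lax to strict could be done like this (constraint and table names are our hypothetical ones from earlier):

```sql
-- Hypothetical: replace the default lax check with strict JSON syntax
ALTER TABLE invoices DROP CONSTRAINT invoices_is_json;
ALTER TABLE invoices ADD CONSTRAINT invoices_is_json
  CHECK (invoice_json IS JSON FORMAT JSON STRICT);
```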
If we use dot notation it fails, so we have to rewrite all the JSON, which leads to huge I/O, plus sophisticated PL/SQL processing when the JSON exceeds 4000 bytes. When we need to update JSON we create small Java utilities that run in our CI tool. They work faster than SQL/PLSQL, especially with Java bulk processing.
We created a perfect-looking JSON table structure and started to ingest our data.
Let's measure just the time, without context switches etc., otherwise we'd have no time to cover everything during the presentation. We will use a rather simple script.
It is slow, and let's leave BULK aside – even if we add it, the performance is low. At least I claim it is low. What's the reason? How do we defeat it?
Let's remove the constraints we just created to see which one contributes more.
We get decent speed, but it is still an issue.
Is it possible to get more? Any ideas? Without bulk, pure SQL, no obvious tricks?
Let me give you a clue.
What does the Oracle documentation say about it?
So ingestion is twice as fast with CACHE and without constraints.
Frankly speaking, we don't switch the constraint off, but knowing our data we use CACHE for most of the BLOBs intended for JSON.
I hope you all have tests for your database structure, so add something like this there and discuss with your developers why they didn't set CACHE for a particular BLOB.
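The CACHE setting discussed above could be applied like this (names are hypothetical):

```sql
-- Enable buffer-cache reads/writes for the JSON BLOB;
-- the slides report roughly 2x faster ingestion with this set
ALTER TABLE invoices MODIFY LOB (invoice_json) (CACHE);
```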
So, let's create a table from which we'll load the data.
Look at the execution plan. Everything is as expected.
Now, let's add some intrigue. Add a constraint to the target table.
It seems everything is fine. No problem, but...
Let's continue our experiment.
Drop the constraint on the target and add a condition to our query that has nothing to do with JSON.
Now drop the constraint on the source.
And lo and behold – direct-path load is back.
Retrieval – extracting information from JSON.
There are many options.
The first way is the common pre-Oracle-12 approach, and it works fine. Let's consider the other two.
The simplest option – use dot notation. Let's check.
Doesn't work. Let's add an alias.
Still doesn't work.
Let's add a check, and now it works.
Let's try json_value with complex structures.
Dot notation doesn't work with arrays in Oracle 12.1, but it does in Oracle 12.2.
P.S. Oracle 12.2 rewrites many json_value calls into a single json_table call.
NOTES about new patch
json_table looks very promising: single parse, slightly verbose syntax, arrays work perfectly – sounds cool, doesn't it?
Let's look at another example which seems very straightforward.
We have two json_table calls, which demand a lot of CPU effort.
We got the result, but at what price – two parses and unreadable SQL!
Let's make it simpler – json_table works on the same row, so it can be used in this manner.
We still have two json_table calls taking a lot of CPU, plus the risk of issues from multiple json_table calls.
So this is the proper way – a single SQL call.
It is rather hard to show issues with views without exposing production data, but we have collected all the issues.
We use the following sequence of actions when JSON starts behaving strangely in complex queries.
Of course, if there is no index we get full scans, which is bad.
Let's create the index. The plan is the same.
The reason is the filter condition.
So we have to create indexes that exactly match the filter conditions – then they appear in the plan.
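A sketch of the point above, with our own hypothetical names – the index expression and the WHERE predicate must match verbatim for the index to be picked up:

```sql
-- Function-based index on a JSON field
CREATE INDEX invoices_cust_ix ON invoices (
  json_value(invoice_json FORMAT JSON, '$.customer.name')
);

-- Used only when the predicate matches the index expression exactly
SELECT *
FROM   invoices
WHERE  json_value(invoice_json FORMAT JSON, '$.customer.name') = 'ACME';
```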
Let's check dot notation – we should ensure the index is used. We see that internally it is rewritten via JSON_QUERY rather than JSON_VALUE, so we need to create a separate set of indexes.
Let’s do it
Let’s run and see what we get.
What do you think?
Exception!!!
Once we add the alias, the index works fine.
I hope you know how these indexes work inside – they are virtual columns on which extended statistics are gathered, but that's another story.
Nevertheless, it is possible to have only one index. Once we have it we can use dot notation, but it makes the JSON field more rigid.
By creating indexes we can maintain a sort of JSON schema.
There are two options for multiple-column indexing: the straightforward one, and the other.
The queries are a little ugly.
Let's try a more appropriate way. As we've seen, these indexes work via virtual columns, so let's use the same approach.
Once we add the columns, index them like ordinary ones.
Let's move to the last case.
Which index will be used?
What should we do? Let's move to the next slide.
Let's create an index without dropping the other indexes, and check all our queries.
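In 12.1 this universal JSON search index is built through Oracle Text; a sketch with our own names (sync options are discussed later in the deck):

```sql
-- 12.1-style JSON search index covering all fields in the document
CREATE INDEX invoice_search_ix ON invoices (invoice_json)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('section group CTXSYS.JSON_SECTION_GROUP');
```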
Even json_table, the other statements, and multi-column queries work properly.
It sounds too good to be true, and indeed there are caveats.
How many rows do you expect to see?
Let's look at the plan. It contains CONTAINS, which is the Oracle Text operator, and _ is a placeholder. Moreover, _ is a token delimiter in the default lexer, which is why the second query returns 2 rows – there are 2 tokens in the inverted index.
So we should instruct Oracle Text to search by the whole phrase. The behavior changes from patch to patch.
So Oracle full-text indexes are rather tricky. There are quite a few nuances, like bitmap index conversion and filtering by indexes, but that is out of the current scope.
Let's play with indexes and JSON where there are a lot of nested elements.
Please pay attention – we have two of them.
Let's try to find data with equivalent queries. All of them return 2 records, which behaves like an OR. Oracle Text CONTAINS deals with tokens, as we know, so if it finds a token anywhere in the document it is happy. There is no way to pin down JSON object boundaries.
Let's try json_table. It works. But can we make the search faster?
And we get a good plan which first limits by tokens and then filters by expression.
Moreover, we don't waste time on JSON_TABLE, so it is faster from both the I/O and CPU perspectives.
Let's check how data ingestion works with FTS indexes.
Let's add a new record. How many records will we see? Zero!
Why? JSON search indexes are FTS indexes, so they need to be refreshed separately. Once we do that, we see the record.
How do we address the issue? Let's look into the dictionary.
Let's fix it. We need to issue a rebuild with REPLACE METADATA. Repeating the dictionary query, we will see the new sync type.
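The metadata change could look like this (index name is illustrative; the exact parameter string depends on the sync mode you want):

```sql
-- Change the sync type in place, without dropping the index
ALTER INDEX invoice_search_ix REBUILD
  PARAMETERS ('replace metadata sync (on commit)');
```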
I deleted the record with ID 8888888 and repeated the insert from the previous page. How many records do we get? The conclusion: if you need to use CONTEXT indexes right after inserting data, you need something else – we'll discuss it further.
Let's check the performance and run our well-known script with intermediate commits. The execution time is huge compared to the first run (about 2 seconds) with strict JSON and unique identifiers.
So the question: which command takes the largest share of time if we look into the database logs? Not the obvious one – it is another command, invoked implicitly before each commit.
Let's defer the index update without reinventing the wheel with hand-made job procedures.
Looking at the details, it is done by a job, but the benefit is that when you drop the index the job is dropped automatically, and the refresh metadata lives in the index itself – one place. So we run the script; it is slow, but definitely better, even counting the refresh time.
If you don't have CREATE JOB privileges, index creation fails. The index gets created, but the full-text objects partially remain. Keep scripts handy for cleaning up the CTXSYS schema.
But what if we need to access indexed data transactionally? Oracle does provide something for that. Let's test it. To do so, drop the index – and don't forget the FORCE option, otherwise Oracle fails and can leave orphan CTXSYS dictionary records.
So let's check how TRANSACTIONAL works. It reads from the Oracle Text staging tables before the data actually reaches the inverted index, but it certainly slows queries down – usually about 2x slower when a decent volume of data is out of sync. It is an instruction for how CONTAINS should behave rather than a SYNC replacement; we still need SYNC.
Please pay attention – there is no COMMIT inside.
Let's make sure we can get our records via full-text search commands without committing them. Now commit the records. Nothing helps. But Oracle can't lie.
The conclusion – if you need transactional access to JSON data, you have to invent something yourself.
The TRANSACTIONAL option doesn't work, but we would still like faster inserts. Oracle 12 has a new feature which internally works like TRANSACTIONAL, i.e. it uses staging tables, but new ones, and updates the inverted index in async mode. It is named STAGE_ITAB. Let's look at the details.
The first command instructs Oracle that a new staging table with a $G suffix should be created alongside the inverted index.
The next one assigns storage to the new table and creates a job which moves data into the index. Interestingly, the job has no schedule – it is invoked based on the DB workload.
The third states that data from staging should be moved to the inverted index in async mode.
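As a sketch (index name is our illustrative one, and the exact parameter string may vary by release), STAGE_ITAB can be switched on via index metadata:

```sql
-- Enable the $G staging table and async maintenance of the inverted index
ALTER INDEX invoice_search_ix REBUILD
  PARAMETERS ('replace metadata stage_itab');
```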
Let’s have a look into the dictionary
Let's check how it works using our famous statement. It sounds good, given that we can query right after commit without an additional sync. Of course we are still outside the transaction context, but still…
There is yet another means of making search extra fast, but we're running out of time.
Let's move to the last but not least part of our presentation.
The stored procedure should load the LOBs, run count(*), and respect the small-table threshold, otherwise the buffer cache is bypassed.
CONTEXT indexes become fragmented. Why? Unfortunately we have no time to discuss it, but we should keep them defragmented to provide fast search.
Serial optimization makes indexes less fragmented, but parallel is faster.
Of course this is just our experience, but inside DataArt we will keep using these guidelines.
You have already seen that proper JSON reinforced by constraints works slowly; but without constraints, dot notation doesn't work, for instance.
The safest JSON treatment – extract one row and use JSON features such as json_table, json_exists, etc.
Oracle JSON still isn't stable even with all patches installed.
----
We try to use Solr or Elastic even if it means delays. The search-speed SLA is stricter than the SLA for seeing fresh data.