The Internet of Things (IoT) has far-reaching use cases - energy grids, communications, policing, security, and manufacturing. I'll show how to use Oracle 19c's Fast Ingest and Fast Lookup features to load IoT data from "edge" sources and take immediate advantage of that information in near real time.
2. My Credentials
• 40 years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g, 12c OCP and ADWC
• Oracle ACE Director since 2014
• ODTUG Database Committee Lead and Board Member
• Editor of ODTUG TechCeleration
• Oracle-centric blog (Generally, It Depends)
• Regular speaker at Oracle OpenWorld, CodeOne, KSCOPE, and international and regional OUGs
E-mail me at jczuprynski@zerodefectcomputing.com
Follow me on Twitter (@JimTheWhyGuy)
Connect with me on LinkedIn (Jim Czuprynski)
3. Edge Computing Opportunities, Everywhere You Look!
Industrial Internet of Things (IIoT), especially manufacturing
Artificial Intelligence IoT (AIoT)
Elon Musk's $100M carbon capture prize
Smart Cities: scooters, bicycles, and traffic flow
Autonomous delivery vehicles
Improving farming: more efficient use of resources & eliminating over-watering
Reducing unnecessary food waste
Improved worker safety for occupations most affected by climate change
4. The Likely Best Solution Is Obvious. Electrify Everything!
Renewable energy costs (wind & solar) are now cheaper than natural gas and coal
Smart Meters for electrical consumption are ubiquitous
"Green" hydrogen: A raging (but good!) debate
New, safer, compartmentalized nuclear technology
Shifts in electrical demand cycles as working from home becomes the "new normal"
For a brief introduction on this concept, check out this short video
5. Fast Ingest Capabilities of Oracle 19c
[Architecture diagram: high-frequency single-row INSERTs from Internet of Things (IoT) sources - smart meters, social media feeds, and CCTV images - flow through an application server into the Oracle 19c database. Instead of being COMMITted immediately, the rows are deferred and collected as batches (Rows Batch 1 through n) in the SGA's Large Pool; "drainer" background processes write the batches to permanent storage, and synchronicity can be verified afterward.]
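Conceptually, routing rows through Fast Ingest is just an ordinary INSERT plus a hint. Here is a minimal sketch, assuming the T_METER_READINGS table introduced on the following slides, with SMR_READING_TS as a hypothetical timestamp column:

-- Once the table is enabled with MEMOPTIMIZE FOR WRITE, the hint below
-- routes each single-row INSERT into the Fast Ingest buffers in the Large Pool.
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO simiot.t_meter_readings
  (smr_id, smr_reading_ts, smr_kwh_used, smr_solar_kwh)
VALUES
  (:meter_id, SYSTIMESTAMP, :kwh_used, :solar_kwh);
-- No COMMIT is needed (or meaningful) here: the rows remain in memory
-- until the W00n "drainer" processes flush them to disk.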
6. Simulating IoT Load Testing For Fast Ingest Evaluation
[Architecture diagram: complex simulated smart-meter transactions are built via PL/SQL procedures and sent by multiple user sessions through the SwingBench workload generator as single-row INSERTs into table T_METER_READINGS in the DATA tablespace. Row batches (1 through n) accumulate in the MEMOPTIMIZE_POOL_SIZE (2G) area within SGA_TARGET (8GB) and are drained by background processes W000 through W00n; writes are verified via DBMS_MEMOPTIMIZE.]
7. SIMIOT: A Simple IoT Schema
Entities and Relationships:
SMART_METERS - Contains information about individual smart meters for which data is being collected
METER_READINGS - Individual readings for each smart meter over (often extremely!) short time intervals
BUSINESS_DESCRIPTIONS - Describes unique business classifications based on licensing issued
Test Environment #1: Oracle 19c (19.3) Developer Virtual Appliance Image • VirtualBox 6.1.0 • Linux 7.7 • 8GB memory • 1 vCPU
Test Environment #2: Oracle 21c (21.1) OCI DBCS Instance • EE - Extreme Performance • Linux 7.7 • 30GB memory • 2 OCPUs
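To make the schema concrete, here is a hedged DDL sketch of the readings table. Only SMR_ID, SMR_KWH_USED, and SMR_SOLAR_KWH appear later in this deck; every other column name is an assumption, and the primary key and segment clauses anticipate restrictions discussed on the following slides.

CREATE TABLE simiot.t_meter_readings (
    smr_id         NUMBER       NOT NULL  -- links to SMART_METERS (no real FK: Fast Ingest forbids them)
   ,smr_reading_ts TIMESTAMP(6) NOT NULL  -- hypothetical reading timestamp
   ,smr_kwh_used   NUMBER(10,3)
   ,smr_solar_kwh  NUMBER(10,3)
   ,CONSTRAINT t_meter_readings_pk
      PRIMARY KEY (smr_id, smr_reading_ts) NOT DEFERRABLE  -- required later for MEMOPTIMIZE FOR READ
   )
   SEGMENT CREATION IMMEDIATE  -- avoids the deferred-storage error seen later (ORA-62156)
   MEMOPTIMIZE FOR WRITE;      -- enables Fast Ingest at creation time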
8. Preparing For a Fast Ingest Workload: Some Unexpected Surprises (1)
SQL> ALTER SYSTEM SET memoptimize_pool_size = 2G SCOPE=SPFILE;
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 1.4496E+10 bytes
Fixed Size 9702624 bytes
Variable Size 3892314112 bytes
Database Buffers 1.0570E+10 bytes
Redo Buffers 23851008 bytes
Database mounted.
Database opened.
. . .
Activating the Memory Optimized Row Store does require an instance bounce
. . .
SQL> show parameter pool_size
NAME TYPE VALUE
------------------------------------ ----------- ------------
java_pool_size big integer 0
large_pool_size big integer 0
memoptimize_pool_size big integer 2G
olap_page_pool_size big integer 0
shared_pool_size big integer 0
streams_pool_size big integer 0
SQL> ALTER SESSION SET CONTAINER = pdb19;
Session altered.
. . .
… Awesome!! Almost there …
SQL> ALTER TABLE simiot.t_meter_readings MEMOPTIMIZE FOR WRITE;
ERROR at line 1: ORA-62172: MEMOPTIMIZE FOR WRITE feature cannot be enabled on table in encrypted tablespace.
Wait … what? Seriously?!?
See MOS Note #2396777.1, 18c New Memoptimized Rowstore, for list of supported infrastructure
9. Preparing For a Fast Ingest Workload: Some Unexpected Surprises (2)
Here's a neat trick to fake out your Oracle 19c test environment into thinking it's actually an Exadata platform! (Provenance: @JulianDontcheff)
SQL> ALTER SYSTEM SET "_exadata_feature_on" = TRUE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
Huzzah!! Almost there (again) …
SQL> ALTER TABLE simiot.t_meter_readings MEMOPTIMIZE FOR WRITE;
ERROR at line 1: ORA-62144: MEMOPTIMIZE FOR WRITE feature not allowed on table with foreign key constraint.
Why you little … now what?
10. Fast Ingest: Intrinsic Limitations
Can I Use Fast Ingest On a Table That … | Allowed?
Uses RANGE or HASH partitioning? | Yes
Uses REFERENCE, INTERVAL, SYSTEM, or AUTOLIST partitioning? | No
Employs a Foreign Key constraint against another table? | No
Employs triggers? | No
Is stored within an encrypted tablespace? | No
Uses any storage compression? | No
Leverages IN MEMORY compression? | No
Is EXTERNAL, Index-Organized, TEMPORARY, or NESTED? | No
Uses a function-based, DOMAIN, BITMAP, or BITMAP JOIN index? | No
Has a column with default values, or that's marked INVISIBLE, VIRTUAL, or UNUSED? | No
Upon further review, most of these restrictions make sense. After all, our intent is to drink from a firehose!
See MOS Note #2605883.1, Limitations for Using Memoptimized Fast Ingest, for the full list of limitations
11. Fast Ingest: How Are Transactions Handled?
Feature: Ingested data is captured in batches within the Large Pool, but not immediately written to the database.
• Advantage: Ingesting data is quite fast, and huge volumes of data from numerous sessions can be captured with extreme efficiency, because the database isn't processing individual rows.
• Drawback: Should the database instance crash before all ingested data is written to the database, it is possible to lose data.
Feature: Since Fast Ingest is not a transaction in the traditional Oracle sense, COMMITs don't occur within its context.
• Advantage: "Normal" Oracle transaction mechanisms are bypassed to enable rapid data capture.
• Drawbacks: No COMMITs mean no ROLLBACKs, either! Parent-child transactions must be coordinated to guard against data loss, and data cannot be queried until it's actually been flushed to disk from the Fast Ingest buffers.
Feature: Index operations and constraint checking happen only when data is finally written from the Large Pool Fast Ingest area to disk.
• Advantage: Not necessarily a bad thing!
• Drawback: Should a primary key violation occur while the "drainer" background processes are writing data from the FI buffers to disk, the violating rows won't be INSERTed … but no exception will be raised.
Feature: Unless the insert fails for valid reasons, the application itself must ensure that all valid data have actually been inserted.
• Advantage: DBMS_MEMOPTIMIZE procedures can be called to verify all data has been written successfully to the database.
• Drawback: The application itself must now ensure all data has indeed been written to the database. (A sketch of this check follows below.)
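Here is a hedged sketch of that application-side durability check, built from the two DBMS_MEMOPTIMIZE functions demonstrated later in this deck; the polling loop itself is an assumption about how an application might use them:

DECLARE
  my_hwm     NUMBER;
  global_hwm NUMBER;
BEGIN
  -- High-water mark of rows this session has handed to the Fast Ingest buffers
  my_hwm := DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID;
  LOOP
    -- Low high-water mark of rows actually applied (written) to disk, globally
    global_hwm := DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID;
    EXIT WHEN global_hwm >= my_hwm;  -- everything this session ingested is now durable
    DBMS_SESSION.SLEEP(1);           -- wait a moment, then re-check
  END LOOP;
END;
/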
12. Tracking Status Via V$MEMOPTIMIZE_WRITE_AREA
SQL> SELECT
(total_size / (1024*1024)) total_size_mb
,(used_space / (1024*1024)) used_space_mb
,(free_space / (1024*1024)) free_space_mb
,num_writes
,num_writers
,con_id
FROM v$memoptimize_write_area;
V$MEMOPTIMIZE_WRITE_AREA contains statistics about the activity of the background "drainer" processes as well as how much memory is in use …
TOTAL_SIZE_MB USED_SPACE_MB FREE_SPACE_MB NUM_WRITES NUM_WRITERS CON_ID
------------- ------------- ------------- ---------- ----------- ------
2055 .156784058 2054.84322 0 106 3
. . .
TOTAL_SIZE_MB USED_SPACE_MB FREE_SPACE_MB NUM_WRITES NUM_WRITERS CON_ID
------------- ------------- ------------- ---------- ----------- ------
2055 1.15670776 2053.84329 8830 107 3
. . .
TOTAL_SIZE_MB USED_SPACE_MB FREE_SPACE_MB NUM_WRITES NUM_WRITERS CON_ID
------------- ------------- ------------- ---------- ----------- ------
2055 3.15655518 2051.84344 24681 109 3
… which makes it easy to see the number of writes to the database continue as the generated workload progresses
13. Verifying the State of Rows Written Via Fast Ingest
DECLARE
applied_hwm NUMBER(15,0);
written_hwm NUMBER(15,0);
BEGIN
applied_hwm := DBMS_MEMOPTIMIZE.GET_APPLY_HWM_SEQID;
written_hwm := DBMS_MEMOPTIMIZE.GET_WRITE_HWM_SEQID;
DBMS_OUTPUT.PUT_LINE(
'Low-Water-Mark for All Rows Applied (Written Globally): '
|| applied_hwm);
DBMS_OUTPUT.PUT_LINE(
'High Water Mark for Rows Written For This Session: '
|| written_hwm);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(
'Fatal unexpected error: ' ||
SQLCODE || ' - ' || SQLERRM);
END;
Low-Water-Mark for All Rows Applied (Written Globally): 498413730056
High Water Mark for Rows Written For This Session: 0
Here's the state of the LWM and HWM before a workload begins …
Low-Water-Mark for All Rows Applied (Written Globally): 498536313235
High Water Mark for Rows Written For This Session: 498536328525
… and then after a workload has been generated within this same session …
Low-Water-Mark for All Rows Applied (Written Globally): 499537736037
High Water Mark for Rows Written For This Session: 498536328525
… and finally, after more time has passed, note the global LWM has surpassed the session's HWM
14. Controlling Fast Ingest Features Via PL/SQL
-----
-- Flush all unwritten data to the database
-- for the +current+ session
-----
BEGIN
DBMS_MEMOPTIMIZE.WRITE_END;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(
'Fatal unexpected error: ' ||
SQLCODE || ' - ' || SQLERRM);
END;
/
We can force any as-yet unwritten data to be written to the database at either a session level …
-----
-- Flush all unwritten data to the database
-- for +all+ sessions
-----
BEGIN
DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(
'Fatal unexpected error: ' ||
SQLCODE || ' - ' || SQLERRM);
END;
/
… or for the entire database with calls to the appropriate procedure
15. Fast Ingest vs. Single-Row Insert: Performance Comparison
Note the difference in statements executed – a 36% improvement with just one hint added!
Note there are fewer physical writes overall, even with many more rows inserted in the same amount of time
16. Fast Lookup Concepts
[Architecture diagram: the same SwingBench-driven Fast Ingest setup as before - simulated smart meter payloads arriving as single-row INSERTs and batched through the MEMOPTIMIZE_POOL_SIZE (2G) area within SGA_TARGET (8GB) - but now T_METER_READINGS and T_SMARTMETERS are also enabled for Fast Lookup, so analytics can read them through the memoptimize pool.]
SQL> ALTER TABLE t_meter_readings MEMOPTIMIZE FOR WRITE;
SQL> ALTER TABLE t_meter_readings MEMOPTIMIZE FOR READ;
SQL> ALTER TABLE t_smartmeters MEMOPTIMIZE FOR READ;
17. Fast Lookup: Reporting on Frequently Accessed Data
Ideally, Fast Lookup is best deployed for data that will be accessed frequently based on exact primary key values
• When a table is in MEMOPTIMIZE FOR READ mode, any Fast Lookup request against its data builds a hash table of PK values
• Corresponding rows are added to buffers in the MEMOPTIMIZEd pool and then pinned
• The hash table is thus a shortcut to reading a row directly from its corresponding buffer, much more quickly than from the traditional buffer cache
NOTE: This doesn't mean Fast Lookup is only useful for tables leveraging Fast Ingest!
Scenario | Example
Often-accessed rows | Rows containing values most likely to be accessed most frequently – for example, movies most recently added to a streaming service
Most recently loaded rows | Users tend to access data about the most recent store sales rather than yesterday's / last week's sales
Repetitive reads on the same larger "chunks" of data | Rows containing a nearly-fully-populated VARCHAR2(2000) column
18. Preparing For a Fast Lookup Workload
SQL> ALTER TABLE simiot.t_smartmeters MEMOPTIMIZE FOR READ;
ERROR at line 1: ORA-62142: MEMOPTIMIZE FOR READ feature requires NOT DEFERRABLE PRIMARY KEY constraint on the table
Doh! That's right, a table must have a primary key for Fast Lookup to work.
SQL> ALTER TABLE simiot.t_meter_readings MEMOPTIMIZE FOR READ;
ERROR at line 1: ORA-62156: MEMOPTIMIZE FOR READ feature not allowed on segment with deferred storage
Jeez, now what?
As of now, there is no MOS Note that contains a full list of limitations! Look through Oracle error messages in the range ORA-62100 through ORA-62200 for a complete list.
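One possible way past ORA-62156 - an assumption on my part, not something this deck demonstrates - is to materialize the deferred segment first and then retry:

-- Force creation of the deferred segment, then re-attempt Fast Lookup.
-- (ALTER TABLE ... ALLOCATE EXTENT is another common way to force it.)
BEGIN
  DBMS_SPACE_ADMIN.MATERIALIZE_DEFERRED_SEGMENTS(
     schema_name => 'SIMIOT'
    ,table_name  => 'T_METER_READINGS');
END;
/
ALTER TABLE simiot.t_meter_readings MEMOPTIMIZE FOR READ;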
19. Controlling Fast Lookup Features Via PL/SQL
-----
-- Populate an object into the MORS pool
-- for Fast Lookup
-----
BEGIN
DBMS_MEMOPTIMIZE.POPULATE(
schema_name => 'SIMIOT'
,table_name => 'T_METER_READINGS'
,partition_name => NULL
);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(
'Fatal unexpected error: ' ||
SQLCODE || ' - ' || SQLERRM);
END;
/
-----
-- Remove an object enabled for Fast Lookup
-- from the MORS pool
-----
BEGIN
DBMS_MEMOPTIMIZE.DROP_OBJECT(
schema_name => 'SIMIOT'
,table_name => 'T_METER_READINGS'
,partition_name => NULL
);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(
'Fatal unexpected error: ' ||
SQLCODE || ' - ' || SQLERRM);
END;
/
20. Fast Lookup: Indications of Success (1)
SELECT name, value
FROM v$sysstat
WHERE name LIKE '%memopt r%'
ORDER BY name;
NAME VALUE
------------------------------------ ----------
memopt r NO IM tasks accepted 4
memopt r cleanup 1
memopt r entries deleted 36
memopt r lookups 1
memopt r misses 1
memopt r populate tasks accepted 4
System-level statistics for Fast Lookup all contain the string "memopt r" …
SELECT
P.program "Executor"
,N.name "Statistic"
,S.value "Value"
FROM
v$sesstat S
,v$statname N
,v$session P
WHERE S.statistic# = N.statistic#
AND S.sid = P.sid
AND N.name LIKE '%memopt r%'
AND S.value > 0
ORDER BY P.program, N.name;
Executor Statistic Value
------------------------------------- ---------------------------------------- ----------
SQL Developer memopt r NO IM tasks accepted 4
SQL Developer memopt r populate tasks accepted 2
oracle@localhost.localdomain (W000) memopt r blocks populated 457
oracle@localhost.localdomain (W000) memopt r populate 1
oracle@localhost.localdomain (W000) memopt r puts 74314
oracle@localhost.localdomain (W000) memopt r rows populated 74314
oracle@localhost.localdomain (W000) memopt r successful puts 74314
oracle@localhost.localdomain (W001) memopt r cleanup 1
oracle@localhost.localdomain (W001) memopt r entries deleted 32817
oracle@localhost.localdomain (W002) memopt r blocks populated 34734
oracle@localhost.localdomain (W002) memopt r puts 5660881
oracle@localhost.localdomain (W002) memopt r puts:buckets full 681453
oracle@localhost.localdomain (W002) memopt r rows populated 5660881
oracle@localhost.localdomain (W002) memopt r successful puts 5660881
oracle@localhost.localdomain (W002) memopt r successful puts:with evictions 681453
oracle@localhost.localdomain (W002) memopt r tag collisions 104
oracle@localhost.localdomain (W007) memopt r cleanup 1
oracle@localhost.localdomain (W007) memopt r entries deleted 2800857
… and here's a way to look at Fast Lookup statistics at a session level
21. Fast Lookup: Indications of Success (2)
A range scan can't take advantage of Fast Lookup … but an equality predicate definitely does!
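A hedged illustration of the difference, using the composite primary key assumed earlier (SMR_ID, SMR_READING_TS):

-- Equality on the full primary key can be served from the memoptimize pool:
SELECT smr_kwh_used, smr_solar_kwh
FROM   simiot.t_meter_readings
WHERE  smr_id = :meter_id
AND    smr_reading_ts = :reading_ts;

-- A range predicate falls back to a conventional access path:
SELECT AVG(smr_kwh_used)
FROM   simiot.t_meter_readings
WHERE  smr_id = :meter_id
AND    smr_reading_ts BETWEEN :from_ts AND :to_ts;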
22. Analyzing Near-Real-Time Data with Oracle Machine Learning (1)
CREATE OR REPLACE VIEW simiot.solar_superusers AS
SELECT
sm_business_type
,sm_zipcode
,sm_id
,ROUND(AVG(smr_kwh_used),2) avg_kwh_used
,ROUND(AVG(smr_solar_kwh),2) avg_solar_kwh
,ROUND(AVG(smr_solar_kwh) / AVG(smr_kwh_used) ,2) pct_solar
,CASE
WHEN ROUND(AVG(smr_solar_kwh) / AVG(smr_kwh_used) ,2) >= 0.15
THEN 1 ELSE 0
END as solar_superuser
FROM
t_smartmeters
,t_meter_readings
WHERE smr_id = sm_id
GROUP BY sm_business_type, sm_zipcode, sm_id
ORDER BY sm_business_type, sm_zipcode, sm_id
;
We'll use this metric to identify which SmartMeter customers are solar energy super-users (defined as 15% or more of their energy generated via solar power on-site)
23. Analyzing Near-Real-Time Data with OML4SQL (2)
DECLARE
-- Processing variables:
SQLERRNUM INTEGER := 0;
SQLERRMSG VARCHAR2(255);
vcReturnValue VARCHAR2(256) := 'Failure!';
v_setlist DBMS_DATA_MINING.SETTING_LIST;
BEGIN
v_setlist('ALGO_NAME') := 'ALGO_KMEANS';
v_setlist('PREP_AUTO') := 'ON';
v_setlist('KMNS_DISTANCE') := 'KMNS_EUCLIDEAN';
v_setlist('KMNS_DETAILS') := 'KMNS_DETAILS_ALL';
v_setlist('KMNS_ITERATIONS') := '3';
v_setlist('KMNS_NUM_BINS') := '10';
. . .
We'll use a k-Means algorithm to delve into which Smart Meters are clustered together, based on various attributes of the companies using them …
24. DECLARE
-- Processing variables:
SQLERRNUM INTEGER := 0;
SQLERRMSG VARCHAR2(255);
vcReturnValue VARCHAR2(256) := 'Failure!';
v_setlist DBMS_DATA_MINING.SETTING_LIST;
BEGIN
v_setlist('ALGO_NAME') := 'ALGO_KMEANS';
v_setlist('PREP_AUTO') := 'ON';
v_setlist('KMNS_DISTANCE') := 'KMNS_EUCLIDEAN';
v_setlist('KMNS_DETAILS') := 'KMNS_DETAILS_ALL';
v_setlist('KMNS_ITERATIONS') := '3';
v_setlist('KMNS_NUM_BINS') := '10';
. . .
Analyzing Near-Real-Time Data with OML4SQL (2)
. . .
DBMS_DATA_MINING.CREATE_MODEL2(
model_name => 'OML4_SIMIOT_CLUSTERING',
mining_function => 'CLUSTERING',
data_query => 'SELECT * FROM simiot.solar_superusers',
set_list => v_setlist,
case_id_column_name => 'SM_ID');
EXCEPTION
WHEN OTHERS THEN
SQLERRNUM := SQLCODE;
SQLERRMSG := SQLERRM;
vcReturnValue :=
'Error creating new OML4SQL model: ' ||
SQLERRNUM || ' - ' || SQLERRMSG;
DBMS_APPLICATION_INFO.SET_MODULE(NULL,NULL);
DBMS_OUTPUT.PUT_LINE(vcReturnValue);
END;
/
… and leverage the view we just created as the source for each Smart Meter's related attributes
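Once the OML4_SIMIOT_CLUSTERING model exists, Oracle's built-in SQL scoring functions can assign each meter to a cluster and measure its distance from the cluster centroid - the "outlier" metric visualized on the next slide. A minimal sketch:

SELECT sm_id
      ,sm_business_type
      ,CLUSTER_ID(oml4_simiot_clustering USING *)       AS cluster_id
      ,CLUSTER_DISTANCE(oml4_simiot_clustering USING *) AS dist_from_centroid
FROM   simiot.solar_superusers
ORDER  BY dist_from_centroid DESC
FETCH FIRST 15 ROWS ONLY;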
25. Analyzing Near-Real-Time Data with OML4SQL (3)
Here's an example of using a Radar Area graph to show the top 15 business types that are the most egregious outliers in terms of solar power used (aka distance from centroid)
26. So … Where Are the Solar-Powered Superstars?
With a relatively simple SQL query against our SmartMeter attributes and their most recent meter readings … Oracle's powerful Spatial and Graph toolsets and free APEX plug-ins make short work of identifying which businesses are taking maximum advantage of solar power
27. Plans for Future Experimentation
Simulating intense direct streaming from IoT devices using Oracle Streaming Services (OSS)
Testing with extreme workloads against an Autonomous Transaction Processing (ATP) instance
Incorporating a new workload generator for more complex IoT streams (details coming soon!)
28. Useful Resources and Documentation
City of Chicago Reference Information:
https://data.cityofchicago.org/Community-Economic-Development/Business-Licenses-Current-Active/uupf-x98q/data
Architecture of the Memoptimized Rowstore:
https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/tuning-system-global-area.html#GUID-9752E93D-55A7-4584-B09B-9623B33B5CCF
Todd Sharp’s Blog Posts on Oracle Streaming Service (OSS):
https://blogs.oracle.com/developers/back-to-the-database-part-1-preparing-to-persist-data-from-a-stream
https://blogs.oracle.com/developers/back-to-the-database-part-2-persisting-data-from-a-stream
https://blogs.oracle.com/developers/back-to-the-database-part-3-publishing-database-changes-to-a-stream
29. Articles on the Future of IoT … and Our Planet
IoT and the Potential to Save the World
https://www.iotforall.com/iot-potential-to-save-the-world
What is IIoT? Industrial Internet of Things Explained
https://www.hologram.io/blog/what-is-iiot
IoT For All at CES: AI, IoT, and Robots
https://www.iotforall.com/ai-iot-aiot-ces
How Smart Devices with Edge Analytics are Helping Business
https://www.iotforall.com/how-smart-devices-with-edge-analytics-are-helping-business
How 100% Clean Energy Could Power Our Cities And Towns
https://grist.org/energy/how-100-clean-energy-could-power-our-cities-and-towns
American suburbs are about to look more like European cities
https://www.fastcompany.com/90576432/american-suburbs-are-about-to-look-more-like-european-cities
Miami pilots e-cargo bikes to reduce congestion, pollution
https://www.smartcitiesdive.com/news/miami-e-cargo-bike-pilot-dhl-city-congestion-pollution/578115/
You say old coal plant, I say green hydrogen
https://www.utilitydive.com/news/you-say-old-coal-plant-i-say-green-hydrogen/588499/
Editor's Notes
Directly from Oracle documentation:
The ingested data is batched in the large pool and is not immediately written to the database. Thus, the ingest process is very fast. Very large volumes of data can be ingested efficiently without having to process individual rows. However, if the database goes down before the ingested data is written out to the database files, it is possible to lose data.
Fast ingest is very different from normal Oracle Database transaction processing where data is logged and never lost once "written" to the database (i.e. committed). In order to achieve the maximum ingest throughput, the normal Oracle transaction mechanisms are bypassed, and it is the responsibility of the application to check to see that all data was indeed written to the database. Special APIs have been added that can be called to check if the data has been written to the database.
The commit operation has no meaning in the context of fast ingest, because it is not a transaction in the traditional Oracle sense. There is no ability to rollback the inserts. You also cannot query the data until it has been flushed from the fast ingest buffers to disk. You can see some administrative information about the fast ingest buffers by querying the view V$MEMOPTIMIZE_WRITE_AREA.
You can also use the packages DBMS_MEMOPTIMIZE and DBMS_MEMOPTIMIZE_ADMIN to perform functions like flushing fast ingest data from the large pool and determining the sequence id of data that has been written.
Index operations and constraint checking is done only when the data is written from the fast ingest area in the large pool to disk. If primary key violations occur when the background processes write data to disk, then the database will not write those rows to the database.
Assuming (for most applications but not all) that all inserted data needs to be written to the database, it is critical that the application insert process checks to see that the inserted data has actually been written to the database before destroying that data. Only when that confirmation has occurred can the data be deleted from the inserting process.
From 19c documentation:
Tables with the following characteristics cannot use fast ingest:
Tables with:
disk compression
in-memory compression
column default values
encryption
functional indexes
domain indexes
bitmap indexes
bitmap join indexes
ref types
varray types
OID$ types
unused columns
virtual columns
LOBs
triggers
binary columns
foreign keys
row archival
invisible columns
Temporary tables
Nested tables
Index organized tables
External tables
Materialized views with on-demand refresh
Sub-partitioning is not supported.
The following partitioning types are not supported.
REFERENCE
SYSTEM
INTERVAL
AUTOLIST
Directly from Oracle 19c documentation:
The ingested data is batched in the large pool and is not immediately written to the database.
Thus, the ingest process is very fast. Very large volumes of data can be ingested efficiently without having to process individual rows.
However, if the database goes down before the ingested data is written out to the database files, it is possible to lose data.
Fast ingest is very different from normal Oracle Database transaction processing where data is logged and never lost once "written" to the database (i.e. committed).
In order to achieve the maximum ingest throughput, the normal Oracle transaction mechanisms are bypassed, and it is the responsibility of the application to check to see that all data was indeed written to the database.
Special APIs have been added that can be called to check if the data has been written to the database.
The commit operation has no meaning in the context of fast ingest, because it is not a transaction in the traditional Oracle sense.
There is no ability to rollback the inserts.
You also cannot query the data until it has been flushed from the fast ingest buffers to disk. You can see some administrative information about the fast ingest buffers by querying the view V$MEMOPTIMIZE_WRITE_AREA.
Parent-child transactions must be synchronized to avoid errors. For example, foreign key inserts and updates of rows inserted into the large pool can return errors, if the parent data is not yet written to disk.
Index operations and constraint checking is done only when the data is written from the fast ingest area in the large pool to disk.
If primary key violations occur when the background processes write data to disk, then the database will not write those rows to the database.
Assuming (for most applications but not all) that all inserted data needs to be written to the database, it is critical that the application insert process checks to see that the inserted data has actually been written to the database before destroying that data. Only when that confirmation has occurred can the data be deleted from the inserting process.
Notes from Kam Discussion:
> Do FORALL bulk INSERTs cloud the effectiveness of MORS vs. single-statement INSERTs?
It’s really designed for single-row INSERTs from multiple sessions.
> What are the thresholds that MORS uses to decide to write transactions to the database?
None, really. When a reserved MORS buffer is filled up, it’s marked for writing and then it is written via W00x processes (really via a job that’s tasked to the W00X process).