The document discusses using virtual columns in Oracle databases to implement business rules and uniqueness constraints across tables in a declarative way. Virtual columns allow expressing attributes as SQL expressions of real columns, enabling indexing and foreign key constraints that check rules involving multiple tables or columns. Business rules that were previously only possible through procedural logic can now be enforced at the database level through virtual columns.
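The summary above stays abstract, so here is a minimal runnable sketch of the trick it describes. SQLite's expression indexes stand in for Oracle virtual columns, and the table and business rule are invented for illustration: an email must be unique among active users only, a rule a plain unique constraint cannot express.

```python
import sqlite3

# The CASE expression below plays the role of a virtual column: it is the
# email for active rows and NULL for inactive ones, and NULLs never collide
# in a unique index - so the rule is enforced declaratively, no triggers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT NOT NULL, active INTEGER NOT NULL)")
conn.execute("""
    CREATE UNIQUE INDEX u_active_email
    ON users (CASE WHEN active = 1 THEN email END)
""")

conn.execute("INSERT INTO users VALUES ('a@x.com', 1)")
conn.execute("INSERT INTO users VALUES ('a@x.com', 0)")  # inactive duplicate: allowed
try:
    conn.execute("INSERT INTO users VALUES ('a@x.com', 1)")  # active duplicate: rejected
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

In Oracle the same idea is typically a virtual column defined with a CASE expression plus a unique index on it; the sketch uses SQLite only because it runs anywhere.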
SQLFire is a high-performance, memory-optimized distributed SQL database.
SQLFire databases run on multiple servers simultaneously, but present a standard SQL interface to client applications and appear to be a single database. SQLFire also makes it easy to add or remove servers at any time, which makes redundancy and elastic scaling very simple.
This presentation gives an overview of SQLFire as well as a walkthrough of the SQL extensions SQLFire uses to create a real distributed SQL database. Importantly, all of the extensions are in the way tables are defined (i.e. the DDL commands) rather than extensions to data inserts or queries, so clients are completely unaware of SQLFire's distributed nature.
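To make the DDL-centric point concrete, here is an invented table definition in the style of SQLFire's extensions. The clause names (PARTITION BY COLUMN, REDUNDANCY) follow SQLFire's documented vocabulary, but treat the exact syntax as an assumption rather than a reference for any particular release; the DDL is held in a string only so the sketch runs as plain Python.

```python
# Only the table definition carries distribution hints; the query below it
# is standard SQL and a client cannot tell the table is distributed.
ddl = """
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    amount      DECIMAL(10, 2)
)
PARTITION BY COLUMN (customer_id)
REDUNDANCY 1
"""
# PARTITION BY COLUMN: spread rows across servers by customer.
# REDUNDANCY 1: keep one extra copy of each partition for failover.

query = "SELECT SUM(amount) FROM orders WHERE customer_id = ?"
```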
SQLFire is a memory-optimized distributed SQL database from VMware. SQLFire is built for applications that need higher speed and lower latency than traditional databases can offer, but also require strong support for querying and transactions.
This webinar introduces the basics of SQLFire, including a discussion of why traditional databases are not scalable enough to deal with the demands of modern applications. I cover some of the extensions SQLFire makes to the SQL standard in order to be a truly horizontally-scalable SQL database.
The demo presented with the webinar shows how SQLFire can transparently scale to process requests faster. In the demo a number of inserts are made, but not before a complex validation process is run on the data being inserted, so the inserts are very slow. With SQLFire, though, you can simply add or remove nodes at any time; if you anticipate a period where you need more processing power, you can add a node and process inserts faster. SQLFire is designed to be horizontally scalable in all features, so you can scale not only inserts but also queries, transactions, and more.
Full source code for the demo is available (see the slides for details).
An overview of the current big data technology landscape, prepared for the companies V.I.Tech and Wellcentive. It answers, at a very high level, why we chose these products and what we actually do with them.
A straight-forward explanation with an example of how JSR-88 aka Deployment Plans can be used in WebLogic Server to make changes to values in deployment descriptors without modifying application archives.
First slide of Hadoop:
* Introduction to Big Data and Hadoop:
- Presenting and defining big data
- Introducing Hadoop and History
- How Hadoop works
- HDFS
MySQL Cluster 7.2 added support for the Memcached API, enabling web services to directly query MySQL Cluster using the Memcached API, and adding a persistent, scalable, fault tolerant backend to Memcached.
The slides take you through the design concepts and introduce a sample social media app built using Memcached and MySQL Cluster.
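To show what "querying through the Memcached API" looks like on the wire, here is a sketch that builds raw memcached text-protocol commands. The key name is invented; a real client would send these bytes over a TCP socket to the server, which in the MySQL Cluster setup maps keys onto NDB rows.

```python
# Memcached text protocol, request side only:
#   set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
#   get <key>\r\n
def memcached_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def memcached_get(key: str) -> bytes:
    return f"get {key}\r\n".encode()

# A web service storing a small JSON document under an (invented) key.
request = memcached_set("user:42", b'{"name": "Ann"}')
```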
Conference tutorial: MySQL Cluster as NoSQL (Severalnines)
Slides from the 'MySQL Cluster as NoSQL' tutorial at Percona Live MySQL Conference 2012 in London.
Tutorial covers:
* MySQL Cluster administration
* NoSQL options for MySQL Cluster and when to use what
* Memcached (installation and configuration)
* Cluster/J
* NDBAPI
* Benchmarking of different access methods on a live cluster
Virtualizing Latency Sensitive Workloads and vFabric GemFire (Carter Shanklin)
This presentation was made by Emad Benjamin of VMware Technical Marketing. Normally I wouldn't upload someone else's preso but I really insisted this get posted and he asked me to help him out.
This deck covers tips and best practices for virtualizing latency sensitive apps on vSphere in general, and takes a deep dive into virtualizing vFabric GemFire, which is a high-performance distributed and memory-optimized key/value store.
Best practices include how to configure the virtual machines and how to tune them appropriately to the hardware the application runs on.
Tackle Containerization Advisor (TCA) for Legacy Applications (Konveyor Community)
Recording of presentation: https://youtu.be/VapEooROERw
With the adoption of cloud services and the reliability and resiliency they offer, enterprises are eager to understand how many of their legacy applications can be containerized.
We propose Tackle Containerization Advisor (TCA), a framework that provides a containerization advisory for legacy applications.
Given an application description in terms of its technical components, TCA proposes a multi-step process that standardizes the raw inputs, curates the technology stack into various components, detects missing components, and finally recommends the best possible containerization approach.
Presenter: Anup Kalia, Research Staff Member @ IBM Research
GitHub: https://github.com/konveyor/tackle-container-advisor
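The multi-step shape described above (standardize raw inputs, curate the stack, detect gaps, recommend) can be sketched as a toy pipeline. All names, aliases and rules here are invented for illustration and are not TCA's actual code, which lives in the linked repository.

```python
# Invented normalization table: raw component strings -> canonical names.
ALIASES = {"tomcat 8": "Tomcat", "oracle db": "Oracle Database", "rhel": "RHEL"}
# Invented knowledge base: components with a known container image.
KNOWN_IMAGES = {"Tomcat", "RHEL"}

def standardize(raw_components):
    # Step 1: map free-text inputs onto a curated vocabulary.
    return [ALIASES.get(c.strip().lower(), c.strip()) for c in raw_components]

def assess(raw_components):
    stack = standardize(raw_components)
    # Step 2: flag components the knowledge base cannot place in a container.
    missing = [c for c in stack if c not in KNOWN_IMAGES]
    # Step 3: recommend based on what could be resolved.
    verdict = "containerize" if not missing else "partially containerizable"
    return {"stack": stack, "unresolved": missing, "recommendation": verdict}

report = assess(["Tomcat 8", "rhel"])
```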
HTTP/2 Comes to Java: Servlet 4.0 and what it means for the Java/Jakarta EE e... (Edward Burns)
Servlet is easily the most important standard in server-side Java. The much-awaited HTTP/2 standard is now complete; fifteen years in the making, it promises to radically speed up the entire web through a series of fundamental protocol optimizations.
In this session we will take a detailed look at the changes in HTTP/2 and discuss how it may change the Java ecosystem including the foundational Servlet 4 specification included in Java/Jakarta EE 8.
The Good, the Bad and the Ugly of Migrating Hundreds of Legacy Applications ... (Josef Adersberger)
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a major German insurance company onto a Kubernetes cluster within one year. We're now close to the finish line and it worked pretty well so far.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way. We'll provide our answers to life, the universe and a cloud native journey like:
- What technical constraints of Kubernetes can be obstacles for applications and how to tackle these?
- How to architect a landscape of hundreds of containerized applications with their surrounding infrastructure like DBs, MQs and IAM, and heavy requirements on security?
- How to industrialize and govern the migration process?
- How to leverage the possibilities of a cloud native platform like Kubernetes without challenging the tight timeline?
Migrating Hundreds of Legacy Applications to Kubernetes - The Good, the Bad, ... (QAware GmbH)
CloudNativeCon North America 2017, Austin (Texas, USA): Talk by Josef Adersberger (@adersberger, CTO at QAware)
Abstract:
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a major German insurance company onto a Kubernetes cluster within one year. We're now close to the finish line and it worked pretty well so far.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way. We'll provide our answers to life, the universe and a cloud native journey like:
- What technical constraints of Kubernetes can be obstacles for applications and how to tackle these?
- How to architect a landscape of hundreds of containerized applications with their surrounding infrastructure like DBs, MQs and IAM, and heavy requirements on security?
- How to industrialize and govern the migration process?
- How to leverage the possibilities of a cloud native platform like Kubernetes without challenging the tight timeline?
Patterns and Pains of Migrating Legacy Applications to Kubernetes (QAware GmbH)
Open Source Summit 2018, Vancouver (Canada): Talk by Josef Adersberger (@adersberger, CTO at QAware), Michael Frank (Software Architect at QAware) and Robert Bichler (IT Project Manager at Allianz Germany)
Abstract:
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud-native apps. But what to do if you’ve no shiny new cloud-native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
Patterns and Pains of Migrating Legacy Applications to Kubernetes (Josef Adersberger)
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you’ve no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
Just over a year ago (before becoming the full time chair and advocate of QCon London, San Francisco, and New York), my main role was with HPE as the principal architect for a client in the US public sector.
The systems we supported were responsible for personnel information, scholarship decisions, and records management. Like so many others, we were also faced with legacy applications, COTS product integrations, polyglot code bases, and often brittle deployments. In an effort to decouple code bases and address some of these issues, we started advocating for a microservice architecture and trying to distinguish it from the SOA practices of the past.
Now, it’s a year later. I have had the incredible opportunity to have access to architects, engineers, and leaders from some of the world’s most respected software companies. These are companies like Uber, Microsoft, Netflix, Apple, Google, Slack, Pinterest, and Etsy. I’ve had the chance to have one-on-one discussions with chief architects, developers, and engineers building the apps I most admire and use every day (some leveraging microservices, some embracing monoliths, and others falling somewhere in between).
Patterns & Practices of Microservices covers some of the things I wish I had known before beginning a push towards microservices just over a year ago. It covers the practices of companies leveraging microservices, the technology tradeoffs when deciding between monoliths and microservices, and the advice I’ve heard in interviewing, podcasting, and iterating on presentations from software giants like Adrian Cockcroft, Matt Ranney, Josh Evans, Martin Thompson, and literally hundreds of other engineers who drop knowledge at QCons around the world.
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work in improving it along with many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains as well as integration with other big data technologies such as Apache Spark, Druid, and Kafka. The talk will also provide a glimpse of what is expected to come in the near future.
NewSQL - Deliverance from BASE and back to SQL and ACID (Tony Rogerson)
There are a number of NewSQL products now on the market, such as VoltDB and Postgres-XL. These promise NoSQL performance and scalability, but with ACID and relational concepts implemented in ANSI SQL.
This session will cover why NoSQL came about, why it has had its day, and why NewSQL will become the backbone of the enterprise for OLTP and analytics.
SEASPC 2011 - SharePoint Security in an Insecure World: Understanding the Fiv... (Michael Noel)
One of the biggest advantages of using SharePoint as a document management and collaboration environment is that a robust security and permissions structure is built into the application itself. Authenticating and authorizing users is a fairly straightforward task, and administration of security permissions is simplified. Too often, however, security for SharePoint stops there: organizations don’t pay enough attention to all of the other considerations that are part of a SharePoint security stack, and more often than not don’t properly build them into a deployment. This includes diverse categories such as Edge, Transport, Infrastructure, Data, and Rights Management security, all areas that are often neglected but are nonetheless extremely important. This session discusses the entire security stack within SharePoint, from best practices around managing permissions and ACLs to comply with role-based access control, to techniques to secure inbound access to externally facing SharePoint sites. The session is designed to be comprehensive, and includes all major security topics in SharePoint and a discussion of various real-world designs that are built to be secure.
The mainstreaming of containerization and microservices is raising a critical question by both developers and operators: how do we debug all this?
Debugging microservices applications is a difficult task. The state of the application is spread across multiple microservices, and it is hard to get a holistic view of it. Currently, debugging of microservices is assisted by OpenTracing, which helps trace a transaction or workflow for post-mortem analysis, and by Linkerd and Istio, which monitor the network to identify latency problems. These tools, however, do not allow you to monitor and interact with the application at run time.
In this talk, we will describe and demonstrate common debugging techniques and we will introduce Squash, a new tool and methodology.
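The tracing idea mentioned above can be sketched as a toy: each service records a span against a shared trace id so a transaction can be reconstructed post mortem. Names are invented; real OpenTracing-style systems propagate the trace id in request headers between microservices and ship spans to a collector.

```python
import uuid

TRACES = {}  # trace_id -> list of spans; stands in for a trace collector

def start_trace():
    trace_id = str(uuid.uuid4())
    TRACES[trace_id] = []
    return trace_id

def record_span(trace_id, service, operation, duration_ms):
    # Each microservice appends its own timing to the shared trace.
    TRACES[trace_id].append(
        {"service": service, "operation": operation, "ms": duration_ms}
    )

trace = start_trace()
record_span(trace, "frontend", "GET /checkout", 12.5)
record_span(trace, "payments", "charge_card", 48.0)
# Post-mortem analysis: find where the latency went.
slowest = max(TRACES[trace], key=lambda s: s["ms"])
```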
Introduction to web application development with Vue (for absolute beginners)... (Lucas Jellema)
In this slide deck I show you how you can easily and quickly create quite rich web applications with Vue 3 - without having to study complex concepts or understand many technical details. I have only recently learned how to work with Vue 3 myself, and now is the best time for me to share my learning experience (and my enthusiasm) with you. I know what I found essential to understand, what most got me excited in these early steps, and what was a little bit hard to grasp. I believe I can present my steps and guide you to experience the same fun and have a similarly gratifying experience. I am not an expert in this subject - I have barely learned how to walk, and that is exactly why I can help you with these first steps with Vue.
In this deck, I do not explain how Vue works. I do not really know that. I will show you how to work with it and how to create web applications that are functional, appealing, fast and responsive.
The approach I am taking is straightforward:
• I will tell you a little bit about web development, browsers and reactive frameworks
• I will show the hello world of Vue applications
• I will explain about components and nesting, events, data binding and reactive behavior and demonstrate these concepts
• I will introduce Vue UI Component libraries – and with no effort at all we will launch our application to the next level – with rich components to explore, manipulate, visualize data collections
• We will publish the web application from our development environment to where the whole world could see it – using GitHub Pages
• As a bonus topic, we discuss state management
At the end of this session you will be able to quickly create a simple yet rich web application with Vue 3, and you will have a starting point to further evolve your skills with the many online resources. I am convinced that you will enjoy your newfound powers and the simplicity and power of Vue 3.
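The "reactive behavior" from the outline above can be sketched in a few lines: when a piece of state changes, everything derived from it updates automatically. This is a deliberately tiny toy in Python, not how Vue 3 is implemented (Vue uses JavaScript proxies and a dependency-tracking scheduler).

```python
class Reactive:
    """A single reactive value: setting it re-runs every subscriber."""

    def __init__(self, value):
        self._value = value
        self._subscribers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        for fn in self._subscribers:
            fn()  # re-run everything that depends on this state

    def subscribe(self, fn):
        self._subscribers.append(fn)
        fn()  # run once immediately, like a computed's initial evaluation

price = Reactive(10)
rendered = []  # stands in for the DOM the framework would update
price.subscribe(lambda: rendered.append(f"price is {price.value}"))
price.value = 12  # changing state triggers a "re-render"
```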
Note: a tutorial accompanies this slide deck - see https://github.com/lucasjellema/code-face-vue3-intro-reactiive-webapps-aug2023/blob/main/README.md
Making the Shift Left - Bringing Ops to Dev before bringing applications to p... (Lucas Jellema)
Designing, agreeing on, implementing and testing the application is our first challenge. But it does not end there. Applications require tender love and care when they are live. Application Operations needs to be in place along with the functionality of the application. AppOps is the process of making sure that the applications are executed as required and that any problems are detected, reported and dealt with. Some mechanisms used in AppOps: transaction tracing, log analysis, post-data-exchange-checks, health checking of all systems involved, in-production-testing of end-to-end data flows. Additionally, AppOps takes care of configuration management, scaling, cost management, technical life cycle management on solution components. In this session, we will take a closer look at what is required to keep those applications going and how we do ops by design from early on in the agile process.
Lightweight coding in powerful Cloud Development Environments (DigitalXchange...) (Lucas Jellema)
Cloud Based Development environments allow software engineers to work in a new and refreshing way. The development environment runs in the cloud, based on a coded environment definition and with the sources from a specific branch in a Git repository. The environment can be quite powerful in memory, CPU and storage. Development can be done from a lightweight device such as a Chromebook or even a tablet. Switching between different environments becomes a breeze, collaborating in an environment is easily done. Using network tunneling, the IDE could run locally against the remote workspace and remote ports can be accessed on localhost. This session demonstrates both Gitpod and Github Codespaces - similar SaaS offerings with generous free tiers. They are great for quick investigation into new technologies, for working through tutorials and for contributing to open source projects. You will smile at the ease and elegance of engineering your software in this way.
Apache Superset - open source data exploration and visualization (Conclusion ...) (Lucas Jellema)
Introducing Apache Superset - an open source platform for data exploration, visualization and analysis - co-starring Trino and Steampipe for providing SQL access to many non-SQL data sources.
CONNECTING THE REAL WORLD TO ENTERPRISE IT – HOW IoT DRIVES OUR ENERGY TRANSI... (Lucas Jellema)
Enterprise IT systems are deaf, blind and highly insensitive. They do not know what is going on in the outside world. Through Internet of Things technology, we provide the eyes, ears and hands that allow enterprises to learn about and react in real time to events in the physical world. The energy transition at a major Dutch energy company (Eneco) is powered by IoT technology - to steer and sometimes curtail windmills and solar farms and to coordinate local energy production and trade. This session shows you how the physical world was connected to the customer portals and apps, asset management systems and Kafka platform through the Azure cloud-based IoT Hub and IoT Edge, digital twins, serverless functions, timeseries datastores and streaming data analysis. It is a story about technological innovation on top of existing foundations and of a vision for business and our society at large.
Help me move away from Oracle - or not?! (Oracle Community Tour EMEA - LVOUG...) (Lucas Jellema)
I hear this aspiration from a growing number of organizations, sometimes as a quite literal question. This, however, is merely half of a wish: apparently, organizations want to quit one thing but have not yet stipulated what they desire instead. What is the objective pursued here? Only to get rid of Oracle?
Organizations with decades of investment in Oracle technology sometimes (and increasingly) express a wish to move away from Oracle. In this session, we will first explore where the desire to move away from Oracle might come from. Then we describe what the term Oracle represents — more than 2,000 products on all layers of the technology stack and in different business areas. Finally, we map out what the ‘moving away from’ consists of: defining where you ‘move to’ and subsequently actually going there.
It will become clear why you should give considerable thought about dropping Oracle, or any other vendors’ technology, when you’re not pleased with your current IT situation. You need to focus on the actual problems and objectives and define the suitable roadmap to fit your real needs. It turns out that the quest is usually for modernization and flexibility - and Oracle can very well be a part of that future.
Original storyline in this Medium Article: https://medium.com/real-vox/what-if-companies-say-help-me-move-away-from-oracle-ffbbc95afc4f
IoT - from prototype to enterprise platform (DigitalXchange 2022) (Lucas Jellema)
In 2019 the company started a small-scale IoT project: smart meters in consumer homes, and a cloud-based IoT platform for device management, metrics collection, monitoring and real-time data processing. From the initial 12 devices and this single use case, the initiative has rapidly scaled to tens of thousands of devices - including entire wind parks and solar farms - and seven substantial business cases, not just for harvesting data but increasingly for real-time actuation. The IoT platform is feeding the brain at the heart of the enterprise - through an event streaming platform and an API platform. It supports complex operations with anomaly detection on metrics streams and device and communication monitoring. This session tells about the eye-catching business cases - their business objectives and results - and explains the journey since the start. It continues the story presented at DigitalXchange 2020, discussing technical challenges and solutions as well as organizational aspects. Areas of particular interest: edge processing, data analytics and machine learning.
Who Wants to Become an IT Architect - A Look at the Bigger Picture - DigitalXch... (Lucas Jellema)
Pitch: The movie The Matrix made it clear: The Architect is powerful. How to be(come) an IT architect? What do you do, what do you need to know, is it fun and why? Using real world examples, core principles and useful tools, this session introduces the subtle art of designing and realizing flexible IT architectures. Taking a step back to get and create an overview, frequently asking why to get to the real intention, bringing aspects such as cost, scale, time, change and business strategy into the design, and bridging the gap between business owners, process managers and technical specialists - that is one way to define the responsibility of an IT architect. In this session, we will discuss what is expected of the architect, what you need to do for that, and what you could use to get it done. How do you get started as an architect, and how do you grow in that role? We discuss a number of real life architectural challenges and solution designs, and a number of architecture principles, patterns, and powers to apply. Never stop programming - but perhaps rise to the architecture challenge too.
Notes: Many IT professionals aspire to become architects. Many architects wonder what it is they have to do. After 27 years in IT I find I have slowly and steadily moved into a role that I can probably use the label architect for, although still with some reluctance. What exactly does that mean - IT architect? While I may not have all answers and the ultimate truth and wisdom, I do have many architectural challenges to discuss and some core principles to share and a number of tips, tricks and tools to recommend that will help anyone get started or grow in a role as architect for software and IT systems. Elements that make an appearance include cloud, agile, DevOps, microservices, persistence, business, powers of persuasion, diagramming, cost, security, software engineering, data.
Outline:
- two real world examples (one new business initiative, one running and struggling project) and how to approach them with an architect's mind
- core principles to apply, patterns to use, what to unearth (the power question of WHY)
- architecture products: what do you deliver as an architect; how do you ensure agility?
- how to be effective? bringing your design to life - communication with stakeholders and powers of persuasion, monitoring adherence, being pragmatic without losing grip
- anecdotal evidence from several small and large product teams - the good and also the ugly (architectural oversights and their consequences)
Some specific answers to address: how much technical knowledge and programming skill does an architect require? What other knowledge is required, and how do you stay on top of your game? How to get going: first steps towards be(com)ing an architect?
Steampipe - use SQL to retrieve data from cloud, platforms and files (Code Ca...) (Lucas Jellema)
Introduction to Steampipe - a tool for retrieving data and metadata about cloud resources, platform resources and file content - all through SQL. Data from clouds, files and platforms can be joined, filtered, sorted, aggregated using regular SQL. Steampipe offers a very convenient way to get hold of data that describes the environment in detail.
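The Steampipe idea described above can be sketched like this: take data that really lives behind an API or in files, surface it as a SQL table, and then join, filter and aggregate it with ordinary SQL. Here a hard-coded list stands in for the plugin that would call a cloud provider's API, and SQLite stands in for Steampipe's embedded engine.

```python
import sqlite3

# Pretend this came from a cloud provider's "list instances" API call.
instances = [
    ("web-1", "eu-west-1", "running"),
    ("web-2", "eu-west-1", "stopped"),
    ("db-1", "us-east-1", "running"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instance (name TEXT, region TEXT, state TEXT)")
conn.executemany("INSERT INTO instance VALUES (?, ?, ?)", instances)

# Once the API data is a table, questions become plain SQL.
running = conn.execute(
    "SELECT region, COUNT(*) FROM instance "
    "WHERE state = 'running' GROUP BY region ORDER BY region"
).fetchall()
```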
Automation of Software Engineering with OCI DevOps Build and Deployment Pipel... (Lucas Jellema)
Automation of software delivery has several advantages. Prevention of human error is certainly one. Consistent and complete execution of tried and tested build and deployment tasks as the only way to apply changes in the live environment. Once the pipelines have been set up, the engineers can focus on the software and applying the required changes to it. To bring that software all the way to production is a breeze. Oracle Cloud Infrastructure offers the DevOps service, introduced in the Summer of 2021. This service comes with git style code repositories, build servers and build pipelines, artifact repositories as well as deployment pipelines. This session introduces OCI DevOps and demonstrates how software can be built and deployed on OKE Kubernetes, Compute Instance VMs and Oracle Functions. From simple source code an application is put in production without manual intervention in the build and deployment process.
Introducing Dapr.io - the open source personal assistant to microservices and...Lucas Jellema
Dapr.io is an open source product, originated at Microsoft and embraced by a broad coalition of cloud vendors and open source projects (it is part of the CNCF). Dapr is a runtime framework that can support any application and that especially shines with distributed applications - for example microservices - that run in containers, spread over clouds and/or edge devices.
With Dapr you give an application a "sidecar" - a kind of personal assistant that takes care of all kinds of common responsibilities. Capturing and retrieving state, publishing and consuming messages or events. Reading secrets and configuration data. Shielding and load balancing over service endpoints. Calling and subscribing to all kinds of SaaS and PaaS facilities. Logging traces across all kinds of application components and logically routing calls between microservices and other application components. Dapr provides generic APIs to the application (HTTP and gRPC) for calling all these generic services – and provides implementations of these APIs for all public clouds and dozens of technology components. This means that your application can easily make use of a wide range of relevant features - with a strict separation between the language the application uses for this (generic, simple) and the configuration of the specific technology (e.g. Redis, MySQL, CosmosDB, Cassandra, PostgreSQL, Oracle Database, MongoDB, Azure SQL etc) that the Dapr sidecar uses. Changing technology does not affect the application, but affects the configuration of the Sidecar. Dapr can be used from applications in any technology - from Java and C#/.NET to Go, Python, Node, Rust and PHP. Or whatever can talk HTTP (or gRPC).
In this Code Café I will introduce you to Dapr.io. I will show you what Dapr can do for you(r application) and how you can Dapr-ize an application. I'll show you how an asynchronously collaborating system of microservices - implemented in different technologies - can be easily connected through Dapr, first to Redis as a Pub/Sub mechanism and then, without modifications, also to Apache Kafka. Then, for those interested, we also do a hands-on in which you will apply Dapr yourself. In a short time you get a good feel for how you can use Dapr for different aspects of your applications. And if nothing else, Dapr is a very easy way to get your code talking to Kafka, S3, Redis, Azure EventGrid, HashiCorp Consul, Twilio, Pulsar, RabbitMQ, HashiCorp Vault, AWS Secret Manager, Azure KeyVault, Cron, SMTP, Twitter, AWS SQS & SNS, GCP Pub/Sub and dozens of other technology components.
How and Why you can and should Participate in Open Source Projects (AMIS, Sof...Lucas Jellema
For a long time I was reluctant to actively contribute to an open source project. I thought it would be rather complicated and demanding – and that I didn't have the knowledge or skills for it, or at the very least that the project team had no need for me.
In December 2021, I decided to make a serious contribution to the Dapr.io project – to finally find out how it works and whether it really is that complicated. In this session I want to tell you about my experiences. How Fork, Clone, Branch, Push (and PR) is the rhythm of contributing to an open source project and how you do that (these are all Git actions against GitHub repositories). How to learn how such a project functions and how to connect to it; which tools are needed and which communication channels are used. I tell how the standards of the project – largely automatically enforced – help me become a better software engineer, with an eye for readability and testability of the code.
How the review process is quite exciting once you have offered your contribution. And how the final "merge to master" of my contribution and then the actual release (Dapr 1.6 contains my first contribution) are nice milestones.
I hope to motivate participants in this session to also take the step themselves and contribute to an open source project in the form of issues or samples, documentation or code. It is valuable to the community and to the specific project, and I think it is definitely a valuable experience for the contributor. I used to look up to it; now that I have done it, it gives me confidence – and it leaves me wanting more (I could still use some help with the work on Dapr.io, by the way).
Microservices, Apache Kafka, Node, Dapr and more - Part Two (Fontys Hogeschoo...Lucas Jellema
Apache Kafka is one of the best-known enterprise-grade message brokers – created at LinkedIn, donated to the Apache Software Foundation and used in an ever growing number of organizations to provide a backbone for asynchronous communication. This session introduces Apache Kafka – history, concepts, community and tooling. In a hands-on lab, participants will create topics, publish and consume messages and get a general feel for Kafka. Simple microservices are developed in NodeJS – publishing to and consuming from Apache Kafka.
Dapr.io has support for Apache Kafka. Using Kafka through Dapr is very straightforward, as is explained, demonstrated and applied in a second hands-on lab – with applications in various programming languages. Participants will even be able to exchange events across their laptops – through a cloud-based Kafka broker.
Use of Apache Kafka in several architecture patterns is discussed – such as data integration, microservices, CQRS and Event Sourcing – along with a number of real-world use cases from several well-known organizations. The Kafka Connect framework is introduced – a set of adapters that allow us to easily connect Kafka to sources and sinks – from which change events are captured and to which messages are published, respectively.
Bonus Lab: Apache Kafka is run on Kubernetes, as is Dapr.io. Multiple mutually interacting microservices are deployed on the same local Kubernetes cluster.
Microservices, Node, Dapr and more - Part One (Fontys Hogeschool, Spring 2022)Lucas Jellema
This session does a quick recap of microservices: why do we want them, what problems do they solve and what are the principles around designing and implementing them? The Dapr.io runtime framework for distributed applications is introduced. Dapr provides a sidecar (almost like a personal assistant to a manager) to an application or microservice, a companion process that handles common tasks such as storing and retrieving state, consuming and publishing messages and events, invoking external services and other microservices as well as handling incoming requests. Participants will do a hands-on lab with Dapr.io and learn how to quickly implement interactions with various technologies, including Redis and MySQL.
Node(JS) is introduced – a server side JavaScript-based programming language that can be used well for implementing microservices. Some of the main characteristics of NodeJS are discussed (functional programming, asynchronous flows, NPM package manager) as well as common use cases (handle incoming HTTP requests, invoke REST APIs). In the second lab, Node and Dapr are used together to implement microservices that interact with databases and message brokers and each other – in a decoupled fashion.
Reinventing Oracle Systems in a Cloudy World (RMOUG Trainingdays, February 2...Lucas Jellema
The cloud is changing many things. Even the decision to not (yet) adopt cloud is one to make explicitly. Now is a time for any organization to reconsider the IT landscape. For each system we should make a conscious decision on its roadmap. The 6R model suggests six ways to move a system forward.
This session uses the 6R model and applies it specifically to Oracle technology based systems: what are the options and considerations for Oracle Database, Oracle Fusion Middleware, custom applications, and other red components? What future should we consider and how do we choose? The paths chosen by several Oracle-heavy users are presented to illustrate these options and the decision making process. Oracle Cloud Infrastructure and Autonomous Database play a role, as do Azure IaaS and Azure Managed Database as well as on-premises systems. Latency, recovery, scalability, licenses, automation, lock-in, skills, and resources all make their appearance.
Help me move away from Oracle! (RMOUG Training Days 2022, February 2022)Lucas Jellema
Organizations with decades of investment in Oracle technology sometimes (and increasingly) express a wish to move away from Oracle. In this session, we will first explore where the desire to move away from Oracle might come from. Then we describe what the term Oracle represents -- more than 2,000 products on all layers in the technology stack and in different business areas. Finally, we map out what the 'moving away from' consists of: defining where you 'move to' and subsequently actually going there.
It will become clear why you should give considerable thought to dropping Oracle, or any other vendor's technology, when you are not pleased with your current IT situation. You need to focus on the actual problems and objectives and define a suitable roadmap to fit your real needs. It turns out that the quest is usually for modernization and flexibility - and Oracle can very well be a part of that future.
DevOps is a term used in many places and unfortunately also to mean many different things. This presentation (largely in Dutch) paints the DevOps picture. While it may not give a clear-cut definition (there does not seem to be one), it certainly makes clear what DevOps is about, what its objectives and origins are and which factors enable and drive DevOps.
Conclusion Code Cafe - Microcks for Mocking and Testing Async APIs (January 2...Lucas Jellema
Microcks is a tool for API Mocking and Testing. This presentation gives an overview of the support in Microcks for asynchronous APIs - the event publishing and consuming behavior of services and applications.
Cloud native applications offer scalability, flexibility, and optimal use of compute resources. Serverless functions interacting through events, leveraging cloud capabilities for persistent storage and automated operations, take organizations to the next level in IT. This session demonstrates polyglot Functions interacting with native cloud services for events and persistence (Object Storage and NoSQL Database) and leveraging the Key and Secrets Vault, Monitoring and Notifications services for operational control. A lightweight API Gateway is used to expose APIs to external consumers. Infrastructure as Code is the guiding principle in deploying both cloud resources and application components, through OCI CLI and Terraform. This session leverages many cloud native (enabling) services in Oracle Cloud Infrastructure. The session will introduce concepts, then spend most of the time on live demonstrations. All sources are shared with the audience, to allow participants to create the same application in their own cloud tenancy. What is so great about Cloud Native Applications? How do you create one? I will explain the first and demonstrate the second. On Oracle Cloud Infrastructure, using services that anyone can use for free, I will live-create a cloud native application that streams, persists, notifies, scales and monitors. Benefits: - get to know many different OCI services - understand the meaning, purpose and benefits of cloud native development - learn how to take your own first steps in OCI - for free!
5. THE TOP-3 EARNING EMPLOYEES
• What can you say about the result of this query with respect to the question: "Who are our top three earning employees?"
A. Correct Answer
B. Sometimes correct
C. Correct if there are never duplicate salaries
D. Not Correct
7. SPECIAL 'BUSINESS RULE': DEFAULT VALUE
• The default value is the value that should be inserted for a column when the client has ignored the column
– Not provided a value nor indicated NULL
• The default value is applied prior to the execution of the Before Row trigger
– So :new.<column_name> has the value that will be inserted
– The Before Row trigger has no built-in way of telling whether the value was provided by the client or supplied as a default by the database
• Default values are typically used for auditing purposes
– Note: default values for columns exposed in the UI should be set in the client
8. COLUMN DEFAULT
• Columns can have default values
– Static or literals
– SQL expressions evaluating to a static value
– Pseudo-columns like USER and CURRENT_DATE
• DO NOT USE SYSDATE! DO NOT USE USER!
– References to Application Context parameters
• sys_context('USERENV', 'IP_ADDRESS') …
– Some funny value to let the before row trigger know that the real (complex) default must be calculated
create table citizens
( name      varchar2(100) default 'John Doe'
, birthdate date          default current_date - 1
, city      varchar2(50)  default sys_context('KANE_CTX', 'DEFAULT_CITY')
, zipcode   varchar2(8)   default 'XYXYXYXYXQQ'
)
9. APPLICATION CONTEXT
• Memory area that enables application developers to define, set, and access key/value pairs
• Rapid access in SQL and PL/SQL
select sys_context('USERENV', 'SESSION_USER')
from dual
l_user := sys_context('USERENV', 'SESSION_USER')
• Two Application Contexts are always around:
– CLIENTCONTEXT and USERENV
[diagram: an Application Context holding attribute/value pairs]
10. APPLICATION CONTEXT APPEARANCES
• Per session (default)
– Stored in the UGA, just like package state
• Globally Accessible (shared across all sessions)
– Stored in the SGA
• Associated with a Client Identifier
– Attributes in a Globally Accessible Application Context can explicitly be tied to the Client Identifier
– And are only accessible to sessions with that Client Identifier
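The lifecycle described above can be sketched in DDL and PL/SQL. This is a minimal sketch; the context name HR_CTX and the trusted package HR_CTX_API are illustrative, not from the slides:

```sql
-- A context is always tied to a trusted package; only that package may set values
create or replace context hr_ctx using hr_ctx_api;

create or replace package hr_ctx_api as
  procedure set_attr(p_name in varchar2, p_value in varchar2);
end hr_ctx_api;
/
create or replace package body hr_ctx_api as
  procedure set_attr(p_name in varchar2, p_value in varchar2) is
  begin
    dbms_session.set_context('HR_CTX', p_name, p_value);
  end set_attr;
end hr_ctx_api;
/
-- A globally accessible (SGA-resident) context would instead be created with:
-- create or replace context hr_global_ctx using hr_ctx_api accessed globally;

-- Reading a value back, in SQL or PL/SQL:
select sys_context('HR_CTX', 'DEFAULT_CITY') from dual;
```

dbms_session.set_context also accepts optional username and client_id arguments, which is how attributes in a globally accessible context are tied to a Client Identifier as described on this slide.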
11. TYPICAL WEB ARCHITECTURE USING CONNECTION POOL
[diagram: web clients share a JDBC Connection Pool with Sessions 1-4; Packages A, B and C hold globals]
12. PACKAGE STATE IS TIED TO DATABASE SESSION
[diagram: each of Sessions 1-4 in the JDBC Connection Pool holds its own copy of the package globals]
13. PACKAGE STATE IS TIED TO DATABASE SESSION – NOT WEB SESSION
[diagram: a web session may be served by a different pooled database session on each request, so package globals do not follow the end user]
14. APPLICATION CONTEXT TO RETAIN STATE FOR LIGHT WEIGHT END USERS
[diagram: question mark – where to keep per-end-user globals when database sessions are pooled?]
15. APPLICATION CONTEXT TO RETAIN STATE FOR LIGHT WEIGHT END USERS
[diagram: a Global Context, accessed via USERENV, replaces per-session package globals]
16. APPLICATION CONTEXT TO RETAIN STATE FOR LIGHT WEIGHT END USERS
[diagram: every pooled session sees the same Global Context through USERENV, regardless of which session serves the request]
17. PACKAGE GLOBALS: THE STATE OF THE PACKAGE IN A SESSION
• This state is lost when the package is recompiled
– That is undesirable in a highly available environment
18. PACKAGE GLOBALS CAN BE REPLACED BY APPLICATION CONTEXT
• The Application Context is untouched by recompilation of the package
– All 'globals' in the application context retain their values
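A minimal sketch of such a replacement: instead of a package-level variable, the package reads and writes an application context, so the value survives recompilation. The package APP_STATE and context APP_STATE_CTX are hypothetical names for illustration:

```sql
create or replace package app_state as
  procedure set_current_dept(p_deptno in number);
  function  get_current_dept return number;
end app_state;
/
create or replace package body app_state as
  -- state lives in the context, not in a package global,
  -- so recompiling this package does not lose it
  procedure set_current_dept(p_deptno in number) is
  begin
    dbms_session.set_context('APP_STATE_CTX', 'CURRENT_DEPT',
                             to_char(p_deptno));
  end set_current_dept;

  function get_current_dept return number is
  begin
    return to_number(sys_context('APP_STATE_CTX', 'CURRENT_DEPT'));
  end get_current_dept;
end app_state;
```

This assumes APP_STATE_CTX has been created with APP_STATE as its trusted package (create context app_state_ctx using app_state).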
19. EBR TO KILL PLANNED DOWNTIME (BECAUSE OF APPLICATION UPGRADE)
[diagram: with Edition-Based Redefinition the application switches from edition VERSION 1 to edition VERSION 2 without taking the application down]
21. FLASHBACK
• Introduced in 9i
• Based on UNDO
• Initially only for recovery
• As of 11g – Total Recall option with Flashback Data Archive
– Controlled history keeping
• Look back into history
– Query trends (version history)
– Difference reporting
– Audit trails (replace journaling tables)
• Requires a trick for transaction history: WHO made the change?
• Also: when is the start of history?
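The flashback queries alluded to above can be sketched against the classic EMP table that appears elsewhere in this deck; the intervals and empno are illustrative:

```sql
-- State of a row as it was 15 minutes ago (Flashback Query)
select *
from   emp as of timestamp systimestamp - interval '15' minute
where  empno = 7839;

-- Row history over an interval (Flashback Version Query).
-- The "trick" for WHO: versions_xid identifies the transaction,
-- not the end user, so an audit column is typically still needed.
select versions_starttime, versions_operation, sal
from   emp versions between timestamp
       systimestamp - interval '1' hour and maxvalue
where  empno = 7839;
```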
22. DATABASE IN MODERN ARCHITECTURE
[diagram: the Database at the center of Mobile, WS, Business Tier, Cache/Grid (L1, L2, L3), Enterprise Service Bus, Services, Standard Applications and Legacy Applications]
23. MULTI TIER ARCHITECTURE
[diagram: Mobile, WS, Business Tier, Cache/Grid (L1, L2, L3), Enterprise Service Bus and Services reach the Database over HTTP REST, HTTP SOAP, JDBC, JPA (H/EL), FTP/WebDAV, DB QRCN and JMX; the Database contributes Stored Procedures, Encapsulation, Decoupling, Authentication & Fine Grained Authorization, Caching, Business Logic and JMX Monitor/Trace/Audit]
24. APPLICATION ARCHITECTURE: DRIVE APPLICATION FROM META DATA
• Agility
• Design Time at Run Time
• Define part of the application behavior and appearance through meta-data (outside the base source code)
– The default settings are defined by developers and deployed along with the application
– Read and interpreted at run time
– Manipulated and re-read and re-interpreted at run time
• Note: very similar to the way the database operates:
– The Data Dictionary is the meta-data driving the behavior of the database
25. SEPARATE BASE DATA AND CUSTOMIZED DATA
• If a value is changed during site-level implementation
– Or run time customization
• It should be kept apart from the base 'meta-data'
– To prevent overwriting customized data when the new release arrives
– To allow for (temporarily) reverting to base data
• A simple solution: the Complex View with two underlying tables approach
– Note: Select … For Update Of is not allowed
[diagram: view ORIGINAL_NAME with an Instead-Of trigger over a Customized Values table and a Base Values table; a new release only touches the Base Values]
26. REPLACE THE ORIGINAL SINGLE TABLE WITH A TWO-TABLE BASE/CUSTOM SPLIT
• rename <original> to <base>
• create table <customizations>
  as
  select * from <base> where rownum = 0
• create or replace view <original>
  as
  select * from <customizations>
  union all
  select b.*
  from   <base> b
         left outer join
         <customizations> c
         on (b.id = c.id)
  where  c.rowid is null
27. REPLACE THE ORIGINAL SINGLE TABLE WITH A TWO-TABLE BASE/CUSTOM SPLIT (2)
• create or replace trigger handle_insert_trg
  instead of insert on <original>
  for each row
  begin
    insert into <customizations> (id, col, col2, …)
    values (:new.id, :new.col, :new.col2, …);
  end;
• create or replace trigger handle_update_trg
  instead of update on <original>
  for each row
  begin
    update <customizations>
    set    col = :new.col, …
    where  id = :new.id;
    if sql%rowcount = 0
    then
      -- no customized row yet: create one carrying the new values
      insert into <customizations> (id, col, col2, …)
      select id, :new.col, :new.col2 from <base> where id = :new.id;
    end if;
  end;
28. APPLICATION ARCHITECTURE: NO SQL
• NO SQL
– Complex SQL is hidden away inside the database
– Cache to not have to query the database all the time
– … and to not take the overhead of a commit for not-so-important data
– Process first – in memory, on the middle tier (BigData and CEP) – and only persist what is useful
[diagram: Web Browser → JEE Application Server ("NO SQL") → RDBMS ("SQL")]
29. QUERY RESULT CHANGE NOTIFICATION
• Continuous Query Notification:
– Send an event when the result set for a query changes
– A background process calls a PL/SQL handler, Java listener or OCI client when the commit has occurred
– The event contains the rowids of the changed rows
• Used for:
– Refreshing specific data caches (middle tier, global context)
– (custom) Replication
30. CONTINUOUS PROCESSING OF DATA STREAMS USING CQL
• Aggregation, spot deviation, match on complex patterns
31. WHO IS AFRAID OF RED, YELLOW AND BLUE
• Table Events
– Column Seq number(5)
– Column Payload varchar2(200)
32. SOLUTION USING LEAD
• With LEAD it is easy to compare a row with its successor(s)
– As long as the pattern is fixed, LEAD will suffice
with look_ahead_events as
( SELECT e.*
  ,      lead(payload)    over (order by seq) next_color
  ,      lead(payload, 2) over (order by seq) second_next_color
  FROM   events e
)
select seq
from   look_ahead_events
where  payload           = 'red'
and    next_color        = 'yellow'
and    second_next_color = 'blue'
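The closing slide hints at 12c; there, row pattern matching with MATCH_RECOGNIZE lifts the fixed-pattern limitation of LEAD. A sketch against the same EVENTS table (a hedged alternative, not part of the original deck):

```sql
-- Find the sequence numbers where a red-yellow-blue run starts
select *
from   events
match_recognize (
  order by seq
  measures red.seq as start_seq
  one row per match
  pattern (red yellow blue)
  define red    as payload = 'red',
         yellow as payload = 'yellow',
         blue   as payload = 'blue'
);
```

Unlike LEAD, the PATTERN clause can express variable-length runs (e.g. red+ yellow* blue) without rewriting the query for each look-ahead distance.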
35. GET THIS WEEK'S GROCERIES
Item[] getGroceries(String[] shoppingList) {
  Item[] items = new Item[shoppingList.length];
  for (int i = 0; i < shoppingList.length; i++) {
    items[i] = shopForItem(shoppingList[i]);
  }
  return items;
}
36. PENSION FUND – SEPTEMBER 2012
[diagram: Employers linked to their Participants, each with Job & Benefits data]
37. FETCHING THE DATA OF THE PENSION FUND FOR THE WEB APPLICATION
select * from employers where id = <324>             -- 1 record
select * from participants where employer_id = <324> -- 100s of records
select * from benefits where participant_id = <#>    -- 10s of records
38. REPORTING ON MANY EMPLOYERS
select * from employers                              -- 1 query, 100s of records
select * from participants where employer_id = <#>   -- 100s of queries, 10k records
select * from benefits where participant_id = <#>    -- 10k queries, 100k records
39. APPLICATION ARCHITECTURE – BULK RETRIEVE
• Have the database bulk up the data retrieval
• Return Ref Cursor, Types and Collections, or JSON/XML
• Benefits Package:
select * from employers where id in <some set>
select * from participants where employer_id in <some set>
select b.*
from   benefits b join participants p on (p.id = b.participant_id)
where  p.employer_id in <some set>
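One possible shape for such a bulk-retrieval "Benefits Package", sketched as a PL/SQL API that returns the three result sets in one round trip via ref cursors; the type ID_TAB and package BENEFITS_API are illustrative names:

```sql
create or replace type id_tab as table of number;
/
create or replace package benefits_api as
  procedure get_employer_data(
    p_employer_ids in  id_tab,
    p_employers    out sys_refcursor,
    p_participants out sys_refcursor,
    p_benefits     out sys_refcursor);
end benefits_api;
/
create or replace package body benefits_api as
  procedure get_employer_data(
    p_employer_ids in  id_tab,
    p_employers    out sys_refcursor,
    p_participants out sys_refcursor,
    p_benefits     out sys_refcursor) is
  begin
    -- one call replaces 1 + 100s + 10k individual queries
    open p_employers for
      select * from employers
      where  id in (select column_value from table(p_employer_ids));
    open p_participants for
      select * from participants
      where  employer_id in (select column_value from table(p_employer_ids));
    open p_benefits for
      select b.*
      from   benefits b join participants p on (p.id = b.participant_id)
      where  p.employer_id in (select column_value from table(p_employer_ids));
  end get_employer_data;
end benefits_api;
```

The client (e.g. over JDBC) fetches each cursor and correlates the rows by employer_id/participant_id itself.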
40. APPLICATION ARCHITECTURE – SERVICE ENABLING
[diagram: a WebLogic Server (Java/JEE, SOA Suite, Oracle Service Bus) and other channels (Email Server, File/FTP Server, Chat/IM XMPP Server) connect over HTTP, JDBC and AQ to the Database, which exposes a Native DB WebService, the EPG, a PL/SQL package, a View, a Table and XML DB]
41. XML/JSON VS. RELATIONAL/ORACLE TYPE
[diagram: exchange options between the JEE Server (Java App with SDO and JAX-WS, SOA Suite with AQ Adapters, JMS, EJB/JPA, Pojo, Oracle Service Bus) and the Database; database-side endpoints include the Native DB WebService (11g), EPG (10g), ADF BC, PL/SQL package, Ref Cursor, JPublisher, DB Types & Collections (8i), AQ, JMS Queue, utl_file, BFILE, URITYPE (9i), View, Table, XML DB and UMS; payloads travel as XML & XSD, JSON/CSV or SDO over HTTP, WS, JDBC, JMS, File and FTP; external parties are a Chat/IM XMPP Server, a File/FTP Server and an Email Server]
42. THE TALKING DATABASE
"Details on the Employee. Employee name is Smith, his job is Analyst. He works in department 20 …"
[diagram: speech bubble from the EMP table]
44. BUSINESS RULES
• Data Oriented Rules or Data Constraints
• Declarative support in the database
– For referential integrity
• An Order must be for a Customer
– For attribute and tuple rules
• Salary must be numeric
• Hiredate may not be in the future
• End date must come after begin date
• No declarative support for complex data rules – across multiple records and tables
– A department in France may not have less than 20% female employees
– Order items of type weapon may not be part of an order that ships around Christmas
45. BUSINESS RULES – WHERE AND HOW TO IMPLEMENT
• Criteria:
– Safe
– Well performing
– Reusable and maintainable
– Productive to implement
• Options:
– Client side
• JavaScript
– Middle-tier
• Java, Enterprise Service Bus
– Database
• Constraints and triggers are statement level – instead of transaction level
46. RDBMS NOT ALWAYS EXCLUSIVELY ACCESSED THROUGH ONE LAYER
[diagram: the Database is accessed by SOA/ESB/WebServices, Batch Bulk Processes, Standard Applications, Data Replication & Synchronization and Legacy Applications]
47. 11G VIRTUAL COLUMNS
• Add columns to a table based on an expression
– Using 'real' columns, SQL functions and user-defined functions
– No data is stored for Virtual Columns, only meta-data
– Virtual Columns can be indexed
alter table emp
ADD
(income AS (sal + nvl(comm,0)))
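Since virtual columns can be indexed, the income column added above can then be indexed and filtered on like a real column. A small sketch (ename and the threshold are illustrative, taken from the classic EMP schema):

```sql
-- The index stores the evaluated expression, even though the
-- virtual column itself holds no data
create index emp_income_ix on emp (income);

-- Queries can filter on the virtual column and use the index
select ename, income
from   emp
where  income > 3000;
```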
48. UNIQUENESS RULES USING VIRTUAL COLUMNS
• Business Rule:
– Not more than one manager per department
alter table emp
ADD
( one_mgr_flag as
  ( case when job = 'MANAGER'
         then deptno
    end
  )
)
alter table emp
add constraint only_one_mgr_in_dept_uk
unique (one_mgr_flag)
49. CHALLENGE: ORDERS BELONG TO A CUSTOMER IN ONE OF TWO TABLES
• The Orders table contains Order records for customers – either Dutch or Australian customers
• These customers are stored in two different tables
• Can we implement referential integrity to ensure that the order's customer exists?
[diagram: ORDER (Id, Name, Country, Customer_Id, …) referencing either OZ_CUSTOMER (Id, Name) or DUTCH_CUSTOMER (Id, Name)]
50. USING VIRTUAL COLUMNS IN FOREIGN KEY RELATIONS
• A foreign key can be created on a Virtual Column
– That means for example we can have a single column with some id
– And two virtual columns with CASE expressions that produce NULL or the ID value
– With Foreign Keys on the Virtual Columns
[diagram: ORDER (Id, Country, Customer_Id, Dutch_id (VC), Australian_id (VC)) with foreign keys to OZ_CUSTOMER (Id, Name) and DUTCH_CUSTOMER (Id, Name)]
51. USING VIRTUAL COLUMNS IN FOREIGN KEY RELATIONS
alter table orders
add (australian_ctr_id as
     (case country
      when 'OZ'
      then customer_id
      end))

alter table orders
add constraint odr_ocr_fk
foreign key (australian_ctr_id)
references oz_customer (id)

alter table orders
add (dutch_ctr_id as
     (case country
      when 'NL'
      then customer_id
      end))

alter table orders
add constraint odr_dcr_fk
foreign key (dutch_ctr_id)
references dutch_customer (id)
52. FOREIGN KEY SHOULD ONLY REFER TO CERTAIN RECORDS USING VC
• A Foreign Key can reference a UK based on a Virtual Column
• That allows a 'conditional foreign key' – a foreign key that can only reference specific records in the referenced table
– Only refer to Women in the PEOPLE table for the Mother Foreign Key
– Only refer to Values in the Domain Values table with Domain Name == 'COLORS'
53. RESTRICTED FOREIGN KEYS USING VIRTUAL COLUMNS
alter table domain_values
add (country_value as
     (case domain_name
      when 'COUNTRIES'
      then domain_value
      end))

alter table domain_values
add (color_value as
     (case domain_name
      when 'COLORS'
      then domain_value
      end))

alter table cars
add constraint car_clr_fk
foreign key (color)
references domain_values (color_value)
[diagram: CARS (ID, Make, Type, Color, Year) with its Color column referencing DOMAIN_VALUES (Id, Domain_Name, Domain_Value) through the virtual columns Color_Value, Gender_Value, OrderStatus_Value, Country_Value and ShipmentMethod_Value]
55. VALIDATION
• Statement time validation means:
  [timeline: DML + validation … more DML + validation … DML in a different session … Commit]
• To prevent leakage we should validate at commit time
– Logically correct, as the transaction is the logical unit
– Effects from other sessions between statement and commit are taken into account
• However: Oracle unfortunately does not provide us with a pre-commit or on-commit trigger
• Workarounds:
– Dummy Table with Materialized View On Commit Refresh and a Trigger on the Materialized View
– Do a soft-commit by calling a package to do the actual commit – that will first do transaction level checks
  • Supported by a deferred check constraint that is violated by each operation that potentially violates a business rule
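A sketch of the materialized-view workaround described above. All names are illustrative, and rule_api.is_satisfied stands in for whatever transaction-level check the business rule needs; Oracle permits (though discourages) triggers on materialized views, which is what this trick exploits:

```sql
-- Marker table: DML that may violate a rule inserts a row here
create table rule_violations_dummy (rule_name varchar2(30));

create materialized view log on rule_violations_dummy with rowid;

-- Refreshed only at commit time - that is the hook we are after
create materialized view rule_check_mv
  refresh fast on commit
  as select rule_name, rowid row_id from rule_violations_dummy;

-- The on-commit refresh fires this trigger; raising here aborts the commit
create or replace trigger rule_check_mv_trg
  after insert on rule_check_mv
  for each row
begin
  if not rule_api.is_satisfied(:new.rule_name)  -- hypothetical check
  then
    raise_application_error(-20001,
      'Business rule violated at commit: ' || :new.rule_name);
  end if;
end;
```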
56. SAFE SOLUTION: USE CUSTOM LOCKS
• Prior to validating a certain business rule for a specific record – acquire a custom lock
– That identifies both Rule and Record
– Using dbms_lock
  [timeline: DML + validation … more DML + validation … DML in a different session … Commit]
• When a record is being validated for a certain rule, other sessions have to wait
• The commit (or rollback) releases all locks
• Validation in a different session will include all committed data
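A sketch of acquiring such a rule-plus-record lock with dbms_lock; the procedure, lock-name scheme and timeout are illustrative:

```sql
create or replace procedure lock_rule_for_record(
  p_rule_name in varchar2,
  p_record_id in number) is
  l_lockhandle varchar2(128);
  l_result     integer;
begin
  -- Derive a lock handle that identifies both rule and record.
  -- Note: allocate_unique performs a commit of its own.
  dbms_lock.allocate_unique(
    lockname   => p_rule_name || '#' || p_record_id,
    lockhandle => l_lockhandle);

  -- Exclusive lock, released automatically at commit/rollback,
  -- so other sessions validating the same rule/record must wait
  l_result := dbms_lock.request(
    lockhandle        => l_lockhandle,
    lockmode          => dbms_lock.x_mode,
    timeout           => 10,
    release_on_commit => true);

  if l_result not in (0, 4)  -- 0 = success, 4 = already held
  then
    raise_application_error(-20002,
      'Could not acquire rule lock (status ' || l_result || ')');
  end if;
end lock_rule_for_record;
```

release_on_commit => true is what gives the behavior on this slide: the commit (or rollback) releases the lock, and a competing session that then acquires it will see all committed data during its own validation.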
57. SUMMARY
• Inline Views
• Defaulting
• Application Context
• Flashback and the time dimension
• NoSQL means smart SQL
– Cache refresh driven by change notification
– Streaming analysis before persisting
• Decoupling galore
– Bulk retrieval
– Service enabling
• Business Rules
• EBR
• 12c promises even more