Solutions for bi-directional integration between Oracle RDBMS and Apache Kafka - Guido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Data sources flowing into Kafka are often native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of the data needed in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes become available in Kafka in near real time? This session presents different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
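To make the Kafka Connect approach concrete, below is a minimal sketch of a source connector configuration that polls an Oracle table into a Kafka topic using the Confluent JDBC source connector. The table, column and connection details are illustrative placeholders, not taken from the talk.

  name=oracle-customers-source
  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
  tasks.max=1
  connection.url=jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1
  connection.user=kafka_connect
  connection.password=secret
  table.whitelist=CUSTOMERS
  mode=timestamp
  timestamp.column.name=LAST_MODIFIED
  topic.prefix=oracle-
  poll.interval.ms=10000

A query-based connector like this is simple to set up, but it polls on an interval and cannot capture deletes; log-based change data capture with Oracle GoldenGate or Debezium addresses both limitations.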
Independent of the source of data, the integration of event streams into an Enterprise Architecture is becoming more and more important in the world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analysed, often with many consumers or systems interested in all or part of the events. Storing such huge event streams in HDFS or a NoSQL datastore is feasible and not such a challenge anymore. But if you want to be able to react fast, with minimal latency, you cannot afford to first store the data and do the analysis later. You have to be able to run part of your analytics right where you consume the data streams. Products for event processing, such as Oracle Event Processing or Esper, have been available for quite a long time and used to be called Complex Event Processing (CEP). In the past few years, another family of products has appeared, mostly out of the Big Data technology space, called Stream Processing or Streaming Analytics. These are mostly open-source products/frameworks such as Apache Storm, Spark Streaming, Flink and Kafka Streams, as well as supporting infrastructure such as Apache Kafka. In this talk I will present the theoretical foundations of Stream Processing, discuss the core properties a Stream Processing platform should provide and highlight the differences you might find between the more traditional CEP and the more modern Stream Processing solutions.
Most data visualization solutions today still work on data sources which are stored persistently in a data store, using the so-called "data at rest" paradigm. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams arrive at high velocity and messages often have to be processed as quickly as possible. For processing and analytics on such data, so-called stream processing solutions are available. But these provide only minimal or no visualization capabilities. One option is to first persist the data into a data store and then use a traditional data visualization solution to present the data. If latency is not an issue, such a solution might be good enough. Another question is which data store solution is able to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualization tools may integrate with that specific data store. Another option is to use a Streaming Visualization solution. This talk presents different architecture blueprints for integrating data visualization into a fast data solution.
Kafka as an event store - is it good enough? - Guido Schmutz
Event Sourcing and CQRS are two popular patterns for implementing a Microservices architecture. With Event Sourcing we do not store the state of an object, but instead store all the events impacting its state. To retrieve an object's state, we then have to read the different events related to that object and apply them one by one. CQRS (Command Query Responsibility Segregation), on the other hand, is a way to dissociate writes (Command) and reads (Query). Event Sourcing and CQRS are frequently grouped and used together to form something bigger. While it is possible to implement CQRS without Event Sourcing, the opposite is not necessarily true. In order to implement Event Sourcing, an efficient Event Store is needed. But is that also true when combining Event Sourcing and CQRS? And what is an event store in the first place, and what features should it implement?
This presentation will first discuss what functionalities an event store should offer and then present how Apache Kafka can be used to implement an event store. But is Kafka good enough or do specific event store solutions such as AxonDB or Event Store provide a better solution?
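As one possible illustration of the query side when combining Event Sourcing and CQRS on Kafka, the sketch below folds the events of each aggregate into its current state using Kafka Streams. The topic name, the string-encoded events and the balance logic are assumptions made for this example only; a real implementation would use typed events and proper serdes.

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.common.utils.Bytes;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;
  import org.apache.kafka.streams.kstream.Consumed;
  import org.apache.kafka.streams.kstream.Grouped;
  import org.apache.kafka.streams.kstream.Materialized;
  import org.apache.kafka.streams.state.KeyValueStore;

  import java.util.Properties;

  public class AccountReadModel {
    public static void main(String[] args) {
      StreamsBuilder builder = new StreamsBuilder();

      // Event topic: key = account id, value = event encoded as "DEPOSIT:<amount>" or "WITHDRAW:<amount>"
      builder.stream("account-events", Consumed.with(Serdes.String(), Serdes.String()))
          .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
          // Replay/fold all events of an aggregate into its current balance (the query-side read model)
          .aggregate(
              () -> 0L,
              (accountId, event, balance) -> event.startsWith("DEPOSIT:")
                  ? balance + Long.parseLong(event.substring(8))
                  : balance - Long.parseLong(event.substring(9)),
              Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("account-balances")
                  .withKeySerde(Serdes.String())
                  .withValueSerde(Serdes.Long()));

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "account-read-model");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      new KafkaStreams(builder.build(), props).start();
    }
  }

The "account-balances" state store can then be queried, which is exactly the role of the query side in CQRS, while the topic itself remains the append-only event log.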
Spark (Structured) Streaming vs. Kafka Streams - Guido Schmutz
Independent of the source of data, the integration and analysis of event streams is becoming more and more important in the world of sensors, social media streams and the Internet of Things. Events have to be accepted quickly and reliably, and they have to be distributed and analyzed, often with many consumers or systems interested in all or part of the events. In this session we compare two popular Streaming Analytics solutions: Spark Streaming and Kafka Streams.
Spark is a fast and general engine for large-scale data processing that has been designed to provide a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala.
Kafka Streams is the stream processing solution that is part of Kafka. It is provided as a Java library and can therefore be easily integrated into any Java application.
This presentation shows how you can implement stream processing solutions with each of the two frameworks, discusses how they compare and highlights the differences and similarities.
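As a flavour of what the Spark side of the comparison looks like in Java, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and counts events per key with the same DataFrame API used for batch jobs. The topic name, the broker address and the required spark-sql-kafka dependency are assumptions for the example.

  import org.apache.spark.sql.Dataset;
  import org.apache.spark.sql.Row;
  import org.apache.spark.sql.SparkSession;
  import org.apache.spark.sql.streaming.StreamingQuery;

  public class SparkKafkaCounts {
    public static void main(String[] args) throws Exception {
      SparkSession spark = SparkSession.builder()
          .appName("structured-streaming-from-kafka")
          .master("local[*]")
          .getOrCreate();

      // Read the Kafka topic as an unbounded table of records
      Dataset<Row> events = spark.readStream()
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load();

      // Same DataFrame operations as in a batch job: count events per key
      Dataset<Row> counts = events
          .selectExpr("CAST(key AS STRING) AS key")
          .groupBy("key")
          .count();

      StreamingQuery query = counts.writeStream()
          .outputMode("complete")
          .format("console")
          .start();
      query.awaitTermination();
    }
  }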
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka - Guido Schmutz
Apache Kafka is a popular distributed streaming data platform and is more and more becoming the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. A lot of the data needed in stream processing is stored in traditional systems backed by relational databases. This session presents different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate, ORDS APIs and bridging Kafka with Oracle AQ.
More and more data sources today provide a constant data stream, from Internet of Things devices to Social Media streams. It is one thing to collect these events at the velocity they arrive, without losing a single message. An Event Hub and a data flow engine can help here. It is another thing to do some (complex) analytics on the data. There is always the option to first store the events in a data sink of choice, such as a data lake implemented on HDFS or an object store, or in a database such as a NoSQL store or even an RDBMS, if the volume of events is not too high. Storing a high-volume event stream is feasible and not such a challenge anymore. But doing so adds to the end-to-end latency, and it is a matter of minutes or hours until you can present the results of your analytics. If you need to react fast, you simply cannot afford to first store the data and do the analysis later. You have to be able to apply part of your analytics directly on the data stream. This is called Stream Processing or Stream Analytics. In this talk I will present the important concepts a Stream Processing solution should support and then dive into some of the most popular frameworks available on the market and how they compare.
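Windowing is one of those core concepts. As a small sketch (assuming a recent Kafka Streams version and made-up topic names), the following counts events per sensor over five-minute tumbling windows directly on the stream, without persisting the raw data first.

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.KeyValue;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.Consumed;
  import org.apache.kafka.streams.kstream.Grouped;
  import org.apache.kafka.streams.kstream.Produced;
  import org.apache.kafka.streams.kstream.TimeWindows;

  import java.time.Duration;

  public class WindowedSensorCounts {
    public static void main(String[] args) {
      StreamsBuilder builder = new StreamsBuilder();

      builder.stream("sensor-readings", Consumed.with(Serdes.String(), Serdes.String()))
          .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
          .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
          .count()
          .toStream()
          // Emit "<sensor>@<windowStart>" together with the count to an output topic
          .map((windowedKey, count) -> KeyValue.pair(
              windowedKey.key() + "@" + windowedKey.window().startTime(), count.toString()))
          .to("sensor-counts-5min", Produced.with(Serdes.String(), Serdes.String()));

      // StreamsConfig and KafkaStreams wiring omitted; identical to any other Kafka Streams application.
    }
  }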
Ingesting and Processing IoT Data - using MQTT, Kafka Connect and KSQL - Guido Schmutz
Internet of Things use cases are a perfect match for processing with a streaming platform such as Kafka and the Confluent Platform. Some of the questions to be answered are: How do we feed the data from our devices into Kafka? Do we directly send data to Kafka? Is Kafka accessible from outside the organization over the internet? What if we want to use a more specific IoT protocol such as MQTT or CoAP in between? How would we integrate it with Kafka? How can we enrich IoT streaming data with static data sitting in a traditional system?
This session will provide answers to these and other questions using a fictitious use case of a trucking company. Trucks are constantly sending data about position and driving habits, which can be used to derive real-time information and actions. A large part of the presentation will be a live demo. The demo will show the implementation of the pipeline incrementally: starting with sending the truck movement events directly to Kafka, then adding MQTT to the sensor data ingestion, followed by using Kafka Streams and KSQL to apply stream processing on the information received. The final pipeline will demonstrate the application of Kafka Connect with MQTT and JDBC source connectors for data ingestion and event stream enrichment, and Kafka Streams and KSQL for stream processing. The key takeaway is the live demonstration of a working end-to-end IoT streaming data ingestion pipeline using Kafka technologies.
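In the talk, the MQTT-to-Kafka hop is handled by Kafka Connect (or a data flow tool); purely as an illustration of what that hop does, here is a hand-rolled bridge using the Eclipse Paho MQTT client and a Kafka producer. The broker URL, the MQTT topic filter and the Kafka topic name are made up for this sketch.

  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;
  import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
  import org.eclipse.paho.client.mqttv3.MqttCallback;
  import org.eclipse.paho.client.mqttv3.MqttClient;
  import org.eclipse.paho.client.mqttv3.MqttMessage;

  import java.util.Properties;

  public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      KafkaProducer<String, String> producer = new KafkaProducer<>(props);

      MqttClient mqtt = new MqttClient("tcp://localhost:1883", "truck-bridge");
      mqtt.setCallback(new MqttCallback() {
        public void connectionLost(Throwable cause) { }
        public void deliveryComplete(IMqttDeliveryToken token) { }
        public void messageArrived(String mqttTopic, MqttMessage message) {
          // Forward each MQTT message (e.g. a truck position) to Kafka, keyed by the MQTT topic
          producer.send(new ProducerRecord<>("truck_position", mqttTopic, new String(message.getPayload())));
        }
      });
      mqtt.connect();
      mqtt.subscribe("truck/+/position");
    }
  }

The Kafka Connect MQTT source connector used in the final pipeline of the demo does the same job declaratively and adds scalability and fault tolerance.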
Event Hub (i.e. Kafka) in Modern Data Architecture - Guido Schmutz
Today's modern data architectures and their implementations contain an Event Hub. What are the benefits of placing an Event Hub in a Modern Data (Analytics) Architecture? What exactly is an Event Hub and what capabilities should it provide? Why is Apache Kafka the most popular realization of an Event Hub?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Hub.
The session will then highlight the different architecture styles which can be supported using an Event Hub (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Hub the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using the so-called "data at rest" paradigm. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams arrive at high velocity and messages often have to be processed as quickly as possible. For processing and analytics on such data, so-called stream processing solutions are available. But these provide only minimal or no visualisation capabilities. One option is to first persist the data into a data store and then use a traditional data visualisation solution to present the data. If latency is not an issue, such a solution might be good enough. Another question is which data store solution is able to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualisation tools may integrate with that specific data store. Another option is to use a Streaming Visualisation solution. These are specially built for streaming data but often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and then shows how the different blueprints can be implemented by mapping products onto them.
Solutions for bi-directional Integration between Oracle RDBMS & Apache Kafka - Guido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone of modern data analytics. Data flowing into Kafka often originates from native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of the data needed in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes become available in Kafka in near real time? In this session, we present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
Building event-driven (Micro)Services with Apache Kafka - Guido Schmutz
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk starts with a quick recap of how we have created systems over the past 20 years and how different architectures evolved from that. It then shows how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and when to use which. It highlights how modern stream processing systems can hold state both internally and in a database, and how this state can be used to further increase the independence of services, the primary goal of a Microservices architecture.
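As a minimal sketch of the "events trigger processing logic" idea (service, topic and group names are invented for the example), a downstream service simply subscribes to the event log instead of being called synchronously over REST:

  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;

  public class ShippingService {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("group.id", "shipping-service");   // every service reads the log with its own consumer group
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(Collections.singletonList("order-events"));
        while (true) {
          ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
          for (ConsumerRecord<String, String> record : records) {
            // The event triggers this service's own logic; the producer never knows who is listening
            System.out.printf("preparing shipment for order %s: %s%n", record.key(), record.value());
          }
        }
      }
    }
  }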
Big Data, Data Lake, Fast Data - Data Serialization Formats - Guido Schmutz
The concept of the "Data Lake" is in everyone's mind today. The idea of storing all the data that accumulates in a company in a central location and making it available sounds very interesting at first. But a Data Lake can quickly turn from a clear, beautiful mountain lake into a huge pond, especially if it is inexpertly filled with all the source data formats that are common in today's enterprises, such as XML, JSON, CSV or unstructured text data. Who, after some time, still has an overview of which data is stored in which format and how it has evolved over different versions? Anyone who wants to help themselves from the Data Lake must ask the same questions over and over again: what information is provided, what data types does it use and how has the content changed over time?
Data serialization frameworks such as Apache Avro and Google Protocol Buffers (Protobuf), which enable platform-independent data modeling and data storage, can help. This talk discusses the possibilities of Avro and Protobuf, shows how they can be used in the context of a data lake and what advantages can be achieved. The support for Avro and Protobuf in Big Data and Fast Data platforms is also covered.
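A small sketch of what schema-based serialization adds, using Avro's Java API with an invented Customer schema: the names, types and optionality travel with the data, so a data lake consumer can always discover what a record contains.

  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericData;
  import org.apache.avro.generic.GenericRecord;

  public class AvroExample {
    public static void main(String[] args) {
      // The Avro schema carries field names, types and defaults together with the data
      String schemaJson = "{"
          + "\"type\":\"record\",\"name\":\"Customer\",\"namespace\":\"example\","
          + "\"fields\":["
          + "  {\"name\":\"id\",\"type\":\"long\"},"
          + "  {\"name\":\"name\",\"type\":\"string\"},"
          + "  {\"name\":\"email\",\"type\":[\"null\",\"string\"],\"default\":null}"
          + "]}";
      Schema schema = new Schema.Parser().parse(schemaJson);

      GenericRecord customer = new GenericData.Record(schema);
      customer.put("id", 42L);
      customer.put("name", "Jane Doe");
      // "email" stays null; the union type ["null","string"] makes the field optional,
      // which is one way schemas can evolve without breaking old readers.

      System.out.println(customer);
    }
  }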
Solutions for bi-directional Integration between Oracle RDBMS & Apache Kafka - Guido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today's enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data held in the RDBMS of a legacy application. It is important to cache this data in the stream processing solution, so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up to date if the source data changes? We can either poll for changes using Kafka Connect or let the RDBMS push the data changes to Kafka. And what about writing data back to the legacy application, for example when an anomaly detected inside the stream processing solution should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action, but this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queuing (the message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
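One way to realise the "cache and join" blueprint mentioned above is a Kafka Streams GlobalKTable fed by CDC from the Oracle table; the sketch below joins an order stream against that local cache. Topic names and the comma-separated value layout are assumptions made for the example.

  import org.apache.kafka.common.serialization.Serdes;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.Consumed;
  import org.apache.kafka.streams.kstream.GlobalKTable;
  import org.apache.kafka.streams.kstream.KStream;
  import org.apache.kafka.streams.kstream.Produced;

  public class OrderEnrichment {
    public static void main(String[] args) {
      StreamsBuilder builder = new StreamsBuilder();

      // Local, continuously updated cache of the customer table,
      // kept current by CDC (GoldenGate or Debezium) writing into the "customer" topic
      GlobalKTable<String, String> customers =
          builder.globalTable("customer", Consumed.with(Serdes.String(), Serdes.String()));

      KStream<String, String> orders =
          builder.stream("order", Consumed.with(Serdes.String(), Serdes.String()));

      // Enrich every order with the current customer record; the lookup is local, no round trip to Oracle
      orders.join(customers,
              (orderId, order) -> order.split(",")[0],   // first field of the order value holds the customer id
              (order, customer) -> order + " | " + customer)
          .to("order-enriched", Produced.with(Serdes.String(), Serdes.String()));

      // StreamsConfig and KafkaStreams wiring omitted for brevity.
    }
  }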
Location Analytics - Real-Time Geofencing using Kafka - Guido Schmutz
An important underlying concept behind location-based applications is geofencing. Geofencing is a process that allows acting on users and/or devices which enter or exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated, as in a radius around a point location, or it can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries). Geofencing lays the foundation for realising use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others. Many of these use cases require low-latency actions to take place when a device enters or leaves a geo-fence, or when it is approaching one. That's where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play. This session presents how location analytics applications can be implemented using Kafka, KSQL and Kafka Streams. It highlights the features available out-of-the-box and then shows how easy it is to extend them with custom user-defined functions (UDFs).
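The core of such a geofencing application is a simple distance test. The sketch below shows a circular geo-fence check in plain Java, of the kind that could be packaged as a KSQL user-defined function or used inside a Kafka Streams filter; the talk's actual UDFs may look different.

  public class GeoFence {

    private static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle (haversine) distance between two coordinates, in metres
    public static double distance(double lat1, double lon1, double lat2, double lon2) {
      double dLat = Math.toRadians(lat2 - lat1);
      double dLon = Math.toRadians(lon2 - lon1);
      double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
          + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
            * Math.sin(dLon / 2) * Math.sin(dLon / 2);
      return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // True if the position lies inside a circular geo-fence around (fenceLat, fenceLon)
    public static boolean insideFence(double lat, double lon,
                                      double fenceLat, double fenceLon, double radiusMetres) {
      return distance(lat, lon, fenceLat, fenceLon) <= radiusMetres;
    }
  }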
Batch and streaming visualization in big data reference architecture, architecture blueprints for streaming visualization, implementations of the blueprints in a fast data solution.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using the so-called "data at rest" paradigm. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams arrive at high velocity and messages often have to be processed as quickly as possible. For processing and analytics on such data, so-called stream processing solutions are available. But these provide only minimal or no visualisation capabilities. Therefore, one can use a dedicated Streaming Visualisation solution. These are specially built for streaming data but often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and highlights some of the products available to implement these blueprints.
Building event-driven (Micro)Services with Apache Kafka Ecosystem - Guido Schmutz
Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk dives into how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and when to use which. It highlights how modern stream processing systems can hold state both internally and in a database, and how this state can be used to further increase the independence of services, the primary goal of a Microservices architecture.
Most data visualization solutions today still work on the "data at rest" paradigm, where data is persisted first and then analyzed. But data sources today often come as a constant stream of data, from IoT devices to Social Media streams. These data streams publish information at high velocity and messages often have to be processed as quickly as possible. For the processing and analytics, so-called stream processing solutions are available. But these provide only minimal or no visualization capabilities. So how do we solve the visualization of high-velocity data streams? One option is to first persist the data and then use a traditional data visualization solution to present it. If latency is not an issue, this might be good enough. A NoSQL database might be an ideal store, but not all traditional visualization tools integrate easily with a specific data store. Another option is to use a dedicated Streaming Visualization solution. These are specially built for streaming data but on the other hand often do not support batch data. This talk presents different architecture blueprints for visualizing fast data and shows some products for implementing these blueprints.
Kafka as your Data Lake - is it Feasible? - Guido Schmutz
For a long time we have been discussing how much data we can keep in Kafka. Can we store data forever, or do we remove data after a while and keep the history in a data lake on object storage or HDFS? With the advent of Tiered Storage in the Confluent Enterprise Platform, storing data in Kafka for much longer becomes very feasible. So can we replace a traditional data lake with just Kafka, maybe at least for the raw data? But what about accessing the data, for example using SQL?
KSQL allows for processing data in a streaming fashion using a SQL-like dialect. But what about reading all the data of a topic? You can reset the offset and still use KSQL. But there is another family of products, so-called query engines for Big Data. They originate from the idea of reading Big Data sources such as HDFS, object storage or HBase using the SQL language. Presto, Apache Drill and Dremio are the most popular solutions in that space. Lately these query engines have also added support for Kafka topics as a source of data. With that, you can read a topic as a table and join it with information available in other data sources. The idea, of course, is not real-time streaming analytics but batch analytics directly on the Kafka topic, without having to store it in big data storage first.
This talk answers how well these tools support Kafka as a data source. Which serialization formats do they support? Is there some form of predicate push-down, or do we always have to read the complete topic? How performant is a query against a topic compared to a query against the same data sitting in HDFS or an object store? And finally, will this allow us to replace our data lake, or at least part of it, with Apache Kafka?
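Reading a complete topic for batch-style analysis is, conceptually, nothing more than rewinding a consumer to the earliest offset and scanning to the end; the sketch below shows this with the plain Java consumer (topic name invented), which is roughly what a query engine has to do when it treats a topic as a table.

  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;

  import java.time.Duration;
  import java.util.List;
  import java.util.Properties;
  import java.util.stream.Collectors;

  public class ReadWholeTopic {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9092");
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

      try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        // Assign all partitions of the topic and rewind to the earliest offset
        List<TopicPartition> partitions = consumer.partitionsFor("raw-events").stream()
            .map(p -> new TopicPartition(p.topic(), p.partition()))
            .collect(Collectors.toList());
        consumer.assign(partitions);
        consumer.seekToBeginning(partitions);

        long count = 0;
        ConsumerRecords<String, String> records;
        do {
          records = consumer.poll(Duration.ofSeconds(1));
          count += records.count();          // batch-style processing of the records would go here
        } while (!records.isEmpty());        // stop at the first empty poll; good enough for a sketch

        System.out.println("records read: " + count);
      }
    }
  }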
Event Broker (Kafka) in a Modern Data Architecture - Guido Schmutz
Today's modern data architectures and their implementations contain an Event Broker. What are the benefits of placing an Event Broker in a Modern Data (Analytics) Architecture? What exactly is an Event Broker and what capabilities should it provide? Why is Apache Kafka the most popular realisation of an Event Broker?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Broker.
The session will then highlight the different architecture styles which can be supported using an Event Broker (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Broker the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
ksqlDB is a stream processing SQL engine which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language and producing results back to a Kafka topic. With that, not a single line of Java code has to be written and you can reuse your SQL know-how. This significantly lowers the bar for getting started with stream processing.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
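To give an impression of how little code is left on the application side, the sketch below runs a ksqlDB push query over the Java client; the stream truck_position_s, its columns and the server address are assumptions borrowed from the fictitious IoT example, not taken from the talk.

  import io.confluent.ksql.api.client.Client;
  import io.confluent.ksql.api.client.ClientOptions;
  import io.confluent.ksql.api.client.Row;
  import io.confluent.ksql.api.client.StreamedQueryResult;

  public class KsqlDbPushQuery {
    public static void main(String[] args) throws Exception {
      ClientOptions options = ClientOptions.create()
          .setHost("localhost")
          .setPort(8088);                     // default ksqlDB server port
      Client client = Client.create(options);

      // A push query: results are emitted continuously as new truck positions arrive
      StreamedQueryResult result = client.streamQuery(
          "SELECT truckId, latitude, longitude FROM truck_position_s "
              + "WHERE eventType != 'Normal' EMIT CHANGES;").get();

      for (int i = 0; i < 10; i++) {          // just print the first few rows of the endless result
        Row row = result.poll();
        System.out.println(row.values());
      }
      client.close();
    }
  }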
Building Event-Driven (Micro)Services with Apache Kafka - Guido Schmutz
Should we use traditional REST APIs to bind services together? Or is it better to use a more loosely-coupled protocol? This talk dives into how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and when to use which.
Solutions for bi-directional integration between Oracle RDBMS and Apache Kafk...confluent
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today's enterprises have their core systems often implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and the ecosystem can not always be done completely separate from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in an RDBMS of a legacy application. It's important to cache this data in the stream processing solution, so that It can be efficiently joined to the data stream. But how do we make sure that the cache is kept up-to-date, if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, i.e. an anomaly is detected inside the stream processing solution which should trigger an action inside the legacy application. Using Kafka Connect we can write to a database table or view, which could trigger the action. But this not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queueing (message broker in the database), CDC through Golden Gate or Debezium, Oracle REST Database Service (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
Ingesting and Processing IoT Data - using MQTT, Kafka Connect and KSQLGuido Schmutz
Internet of Things use cases are a perfect match for processing with a streaming platform such as Kafka and the Confluent Platform. Some of the questions to be answered are: How do we feed the data from our devices into Kafka? Do we directly send data to Kafka? Is Kafka accessible from outside the organization over the internet? What if we want to use a more specific IoT protocol such as MQTT or CoAP in between? How would we integrate it with Kafka? How can we enrich IoT streaming data with static data sitting in a traditional system?
This session will provide answers to these and other questions using a fictitious use case of a trucking company. Trucks are constantly sending data about position and driving habits, which can be used to derive real-time information and actions. A large part of the presentation will be a live demo. The demo will show the implementation of the pipeline incrementally: starting with sending the truck movement events directly to Kafka, then adding MQTT to the sensor data ingestion, followed by using Kafka Streams and KSQL to apply stream processing on the information received. The final pipeline will demonstrate the application of Kafka Connect with MQTT and JDBC source connectors for data ingestion and event stream enrichment, and Kafka Streams and KSQL for stream processing. The key takeaway is the live demonstration of a working end-to-end IoT streaming data ingestion pipeline using Kafka technologies.
Event Hub (i.e. Kafka) in Modern Data ArchitectureGuido Schmutz
Today's modern data architectures and the their implementations contain an Event Hub. What are the benefits of placing an Event Hub in a Modern Data (Analytics) Architecture? What exactly is an Event Hub and what capabilities should it provide? Why is Apache Kafka the most popular realization of an Event Hub?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Hub.
Then the session will highlight the different architecture styles which can be supported using an Event Hub (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications and how can these be combined into a unified architecture, making the Event Hub the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using the so called “data at rest” paradigms. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data stream publish with high velocity and messages often have to be processed as quick as possible. For the processing and analytics on the data, so called stream processing solutions are available. But these only provide minimal or no visualisation capabilities. One option is to first persist the data into a data store and then use a traditional data visualisation solution to present the data. If latency is not an issue, such a solution might be good enough. An other question is which data store solution is necessary to keep up with the high load on write and read. If it is not an RDBMS but an NoSQL database, then not all traditional visualisation tools might already integrate with the specific data store. An other option is to use a Streaming Visualisation solution. They are specially built for streaming data and often do not support batch data. A much better solution would be to have one tool capable of handling both, batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and then we show how the different blueprints can be implemented by mapping products onto the blueprints.
Solutions for bi-directional Integration between Oracle RDMBS & Apache KafkaGuido Schmutz
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone of modern data analytics. Data flowing into Kafka often originates from native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new and modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDMBS and Kafka, so that changes are available in Kafka as soon as possible in near-real-time? It this session, we present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
Building event-driven (Micro)Services with Apache Kafka Guido Schmutz
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will start with quick recap of how we created systems over the past 20 years and how different architectures evolved from it. The talk will show how we piece services together in event driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between a request-driven and event-driven communication and show when to use which. It highlights how the modern stream processing systems can be used to hold state both internally as well as in a database and how this state can be used to further increase independence of other services, the primary goal of a Microservices architecture.
Building Event Driven (Micro)services with Apache KafkaGuido Schmutz
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will start with quick recap of how we created systems over the past 20 years and how different architectures evolved from it. The talk will show how we piece services together in event driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so.
Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between a request-driven and event-driven communication and show when to use which. It highlights how the modern stream processing systems can be used to hold state both internally as well as in a database and how this state can be used to further increase independence of other services, the primary goal of a Microservices architecture.
Building Event Driven (Micro)services with Apache KafkaGuido Schmutz
What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will start with quick recap of how we created systems over the past 20 years and how different architectures evolved from it. The talk will show how we piece services together in event driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so.
Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between a request-driven and event-driven communication and show when to use which. It highlights how the modern stream processing systems can be used to hold state both internally as well as in a database and how this state can be used to further increase independence of other services, the primary goal of a Microservices architecture.
Big Data, Data Lake, Fast Data - Dataserialiation-FormatsGuido Schmutz
The concept of "Data Lake" is in everyone's mind today. The idea of storing all the data that accumulates in a company in a central location and making it available sounds very interesting at first. But Data Lake can quickly turn from a clear, beautiful mountain lake into a huge pond, especially if it is inexpertly entrusted with all the source data formats that are common in today's enterprises, such as XML, JSON, CSV or unstructured text data. Who, after some time, still has an overview of which data, which format and how they have developed over different versions? Anyone who wants to help themselves from the Data Lake must ask themselves the same questions over and over again: what information is provided, what data types do they have and how has the content changed over time?
Data serialization frameworks such as Apache Avro and Google Protocol Buffer (Protobuf), which enable platform-independent data modeling and data storage, can help. This talk will discuss the possibilities of Avro and Protobuf and show how they can be used in the context of a data lake and what advantages can be achieved. The support on Avro and Protobuf by Big Data and Fast Data platforms is also a topic.
Solutions for bi-directional Integration between Oracle RDMBS & Apache KafkaGuido Schmutz
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today’s enterprises have their core systems often implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and the ecosystem can not always be done completely separate from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in an RDBMS of a legacy application. It’s important to cache this data in the stream processing solution, so that It can be efficiently joined to the data stream. But how do we make sure that the cache is kept up-to-date, if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, i.e. an anomaly is detected inside the stream processing solution which should trigger an action inside the legacy application. Using Kafka Connect we can write to a database table or view, which could trigger the action. But this not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queueing (message broker in the database), CDC through Golden Gate or Debezium, Oracle REST Database Service (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
Location Analytics - Real-Time Geofencing using Kafka Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated—as in a radius around a point location, or a geo-fence can be a predefined set of boundaries (such as secured areas, buildings, boarders of counties, states or countries). Geofencing lays the foundation for realising use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others. Many of the use cases mentioned above require low-latency actions taken place, if either a device enters or leaves a geo-fence or when it is approaching such a geo-fence. That’s where streaming data ingestion and streaming analytics and therefore the Kafka ecosystem comes into play. This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exiting features available out-of-the-box and then shows how easy it is to extend it by custom defined functions (UDFs).
Batch and streaming visualization in big data reference architecture, architecture blueprints for streaming visualization, implementations of the blueprints in a fast data solution.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, following the so-called "data at rest" paradigm. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams publish with high velocity and messages often have to be processed as quickly as possible. For the processing and analytics on the data, so-called stream processing solutions are available. But these only provide minimal or no visualisation capabilities. Therefore, one can use a dedicated Streaming Visualisation solution. These are specially built for streaming data but often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and highlights some of the products available to implement these blueprints.
Building event-driven (Micro)Services with Apache Kafka Ecosystem – Guido Schmutz
Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will dive into how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between request-driven and event-driven communication and show when to use which. It highlights how modern stream processing systems can be used to hold state both internally as well as in a database, and how this state can be used to further increase independence from other services, the primary goal of a Microservices architecture.
Spark (Structured) Streaming vs. Kafka Streams - two stream processing platfo... – Guido Schmutz
Independent of the source of data, the integration and analysis of event streams get more important in the world of sensors, social media streams and Internet of Things. Events have to be accepted quickly and reliably, they have to be distributed and analyzed, often with many consumers or systems interested in all or part of the events. In this session we compare two popular Streaming Analytics solutions: Spark Streaming and Kafka Streams.
Spark is a fast and general engine for large-scale data processing and has been designed to provide a more efficient alternative to Hadoop MapReduce. Spark Streaming brings Spark's language-integrated API to stream processing, letting you write streaming applications the same way you write batch jobs. It supports both Java and Scala.
Kafka Streams is the stream processing solution which is part of Kafka. It is provided as a Java library and by that can be easily integrated with any Java application.
Most data visualization solutions today still work on the "data at rest" paradigm, where data is persisted first and then analyzed. But data sources today often come as a constant stream of data, from IoT devices to Social Media streams. These data streams publish information with high velocity, and messages often have to be processed as quickly as possible. For the processing and analytics, so-called stream processing solutions are available. But these only provide minimal or no visualization capabilities. So how do we solve the visualization of high-velocity data streams? One option is to first persist the data and then use a traditional data visualization solution to present it. If latency is not an issue, this might be good enough. A NoSQL database might be an ideal solution, but not all traditional visualization tools easily integrate with a specific data store. Another option is to use a dedicated Streaming Visualization solution. These are specially built for streaming data but, on the other hand, often do not support batch data. This talk presents different architecture blueprints for visualizing fast data and shows some products for implementing these blueprints.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, following the so-called "data at rest" paradigm. More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. These data streams publish with high velocity and messages often have to be processed as quickly as possible. For the processing and analytics on the data, so-called stream processing solutions are available. But these only provide minimal or no visualisation capabilities. One option is to first persist the data into a data store and then use a traditional data visualisation solution to present it. If latency is not an issue, such a solution might be good enough. Another question is which data store is necessary to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualisation tools might already integrate with the specific data store. Another option is to use a Streaming Visualisation solution. These are specially built for streaming data and often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution, and then we show how the different blueprints can be implemented by mapping products onto them.
Kafka as your Data Lake - is it Feasible? – Guido Schmutz
For a long time we have discussed how much data we can keep in Kafka. Can we store data forever, or do we remove data after a while and keep the history in a data lake on Object Storage or HDFS? With the advent of Tiered Storage in the Confluent Enterprise Platform, storing data in Kafka for much longer is very feasible. So can we replace a traditional data lake with just Kafka? Maybe at least for the raw data? But what about accessing the data, for example using SQL?
KSQL allows for processing data in a streaming fashion using a SQL-like dialect. But what about reading all data of a topic? You can reset the offset and still use KSQL. But there is another family of products, so-called query engines for Big Data. They originate from the idea of reading Big Data sources such as HDFS, object storage or HBase using the SQL language. Presto, Apache Drill and Dremio are the most popular solutions in that space. Lately these query engines have also added support for Kafka topics as a source of data. With that, you can read a topic as a table and join it with information available in other data sources. The idea of course is not real-time streaming analytics but batch analytics directly on the Kafka topic, without having to store it in a big data storage.
This talk answers how well these tools support Kafka as a data source. What serialization formats do they support? Is there some form of predicate push-down supported, or do we always have to read the complete topic? How performant is a query against a topic, compared to a query against the same data sitting in HDFS or an object store? And finally, will this allow us to replace our data lake, or at least part of it, with Apache Kafka?
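To make the query-engine idea concrete, here is a hedged sketch of what such a batch query against a Kafka topic could look like in Presto/Trino, assuming a 'kafka' catalog is configured and the 'orders' topic is mapped to columns via a table definition file; catalog, topic and column names are illustrative and not part of the talk.

SELECT o.customer_id,
       count(*)           AS order_count,
       sum(o.total_price) AS revenue
FROM kafka.default.orders o
GROUP BY o.customer_id
ORDER BY revenue DESC;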
Event Broker (Kafka) in a Modern Data Architecture – Guido Schmutz
Today's modern data architectures and their implementations contain an Event Broker. What are the benefits of placing an Event Broker in a Modern Data (Analytics) Architecture? What exactly is an Event Broker and what capabilities should it provide? Why is Apache Kafka the most popular realisation of an Event Broker?
These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Broker.
Then the session will highlight the different architecture styles which can be supported using an Event Broker (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Broker the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
ksqlDB is a stream processing SQL engine which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language and producing results back to a Kafka topic. Not a single line of Java code has to be written and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done in a live demo on a fictitious IoT sample.
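For illustration, a minimal ksqlDB sketch of such a windowed aggregation; the topic and column names are assumptions for the sketch, not part of the original abstract.

-- Assumed stream of IoT sensor readings backed by a Kafka topic (illustrative names).
CREATE STREAM sensor_reading (sensor_id VARCHAR KEY, temperature DOUBLE)
  WITH (KAFKA_TOPIC='sensor_reading', VALUE_FORMAT='JSON');

-- One-minute tumbling-window average per sensor, continuously written to a new Kafka topic.
CREATE TABLE avg_temperature_per_minute AS
  SELECT sensor_id,
         AVG(temperature) AS avg_temp
  FROM sensor_reading
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY sensor_id
  EMIT CHANGES;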
Building Event-Driven (Micro)Services with Apache Kafka – Guido Schmutz
Should we use traditional REST APIs to bind services together? Or is it better to use a more loosely-coupled protocol? This talk will dive into how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a traditional as well as in a stream processing fashion. The talk will show the difference between request-driven and event-driven communication and show when to use which.
Solutions for bi-directional integration between Oracle RDBMS and Apache Kafk... – confluent
CouchApps are web applications built using CouchDB, JavaScript, and HTML5. CouchDB is a document-oriented database that stores JSON documents, has a RESTful HTTP API, and is queried using map/reduce views. This talk will answer your basic questions about CouchDB, but will focus on building CouchApps and related tools.
Web APIs have revolutionized all kinds of products and services, and still continue to do so. Nowadays the most relevant architecture is REST along with the JSON media type. Furthermore, lots of specifications to serialize those media types are appearing. JSON API has released its first version last May.
Databases are like languages: it’s very useful to know more than one. NoSQL databases promise better performance, scaling, lower cost of ownership, and flexibility for many use cases. With recent advances in NoSQL including ACID transactions, SQL queries, scopes, collections, and more, making the jump to NoSQL is becoming more straightforward. In this session, you will learn how to automatically migrate a relational database (including tables, data, indexes, users, and even queries) over to a modern NoSQL database.
The three steps that will be covered include:
1. Lifting your legacy data and data structure into a modern database.
2. Shifting your legacy application and clients to use NoSQL
3. Refactoring your legacy data model to improve performance and efficiency
After this short session, you’ll have taken a huge leap to learning a new technology and providing benefits to your team and organization, including the ability to:
- Develop faster with SQL for JSON queries (N1QL), plus multi-modal key-value, full text search, and analytics capabilities
- Deploy everywhere from edge to cloud, wherever and however you want
- Perform optimally at scale with a built-in memory-first architecture for sub-millisecond operations
OData: Universal Data Solvent or Clunky Enterprise Goo? (GlueCon 2015) – Pat Patterson
Why would anyone but the most pedestrian enterprise developer be interested in a data access protocol originally designed by Microsoft, implemented in XML and handed to OASIS for standardization? The Open Data Protocol, or OData for short, has evolved into a clean, RESTful interface for CRUD operations against data services. Alongside the usual enterprise suspects such as Microsoft, Salesforce and IBM, OData has been adopted by government and non-profit agencies to open up their data and make it accessible to the public. For developers wanting to consume data, or create their own OData services, there's no shortage of open source options, from Apache Olingo in Java to node-odata and ODataCpp. Whether you're accessing customer orders in SAP or the Whitehouse visitor book, you're going to need some OData smarts.
These are slides from our Big Data Warehouse Meetup in April. We talked about NoSQL databases: What they are, how they’re used and where they fit in existing enterprise data ecosystems.
Mike O’Brian from 10gen, introduced the syntax and usage patterns for a new aggregation system in MongoDB and give some demonstrations of aggregation using the new system. The new MongoDB aggregation framework makes it simple to do tasks such as counting, averaging, and finding minima or maxima while grouping by keys in a collection, complementing MongoDB’s built-in map/reduce capabilities.
For more information, visit our website at http://casertaconcepts.com/ or email us at info@casertaconcepts.com.
CouchDB Mobile - From Couch to 5K in 1 Hour – Peter Friese
In this talk, I explain how to use CouchDB Mobile to connect your iPhone or Android phone with a remote CouchDB to build a RunKeeper clone. The code for this talk is available at https://github.com/peterfriese/CouchTo5K
Enterprise applications are complex making it difficult to fit everything in one model. NoSQL is taking a leading role in the next generation database technologies and polyglot persistence a good option to leverage the strength of multiple data stores. This talk will introduce the Spring Data project, an umbrella project that provides a familiar and consistent Spring-based programming model for a wide range of data access technologies such as Redis, MongoDB, HBase, Neo4j...while retaining store-specific features and capabilities.
Stop the noise! - Introduction to the JSON:API specification in Drupal – Björn Brala
If you’ve ever argued about the way your JSON responses should be formatted, JSON:API can be your anti-bikeshedding tool. JSON:API is a great way to expose a consistent API in your application.
In this session, we will talk about how JSON:API got to where it is today and how it can help you make Drupal the core of all your online transactions. We will check out the specifications and look at the main benefits of JSON:API and see how Drupal implemented the spec.
Expect to learn the structure and features of the JSON:API specifications and why it should be your smart default. You should be able to get started right away with some examples we will provide in this session.
30 Minutes to the Analytics Platform with Infrastructure as Code – Guido Schmutz
Analytical platforms for PoCs and evaluation can be built in the cloud in an hour - with ready-made setup scripts. But if you put the services together freely, it gets more difficult. The open-source platform-in-a-box "Platys" (https://github.com/TrivadisPF/platys) shows that it is easier for test and PoC environments. In addition to possible uses and examples, we explain services and "just briefly" set up a data lake with a database, event broker, stream processing, blob store, SQL access and data science notebook.
Event Hub (i.e. Kafka) in Modern Data (Analytics) Architecture – Guido Schmutz
Today's modern data architectures and their implementations contain an Event Hub. What are the benefits of placing an Event Hub in a Modern Data (Analytics) Architecture? What exactly is an Event Hub and what capabilities should it provide? Why is Apache Kafka the most popular realization of an Event Hub? These and many other questions will be answered in this session. The talk will start with a vendor-neutral definition of the capabilities of an Event Hub. Then the session will highlight the different architecture styles which can be supported using an Event Hub (Kafka), such as Streaming Data Integration, Stream Analytics and Decoupled Event-Driven Applications, and how these can be combined into a unified architecture, making the Event Hub the central nervous system of an enterprise architecture. We will end with an overview of the Kafka ecosystem and a placement of the various components onto the Modern Data (Analytics) Architecture.
Location Analytics - Real-Time Geofencing using Apache Kafka – Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated (as in a radius around a point location) or it can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries).
Geofencing lays the foundation for realizing use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others.
GPS tracking tells constantly and in real time where a device is located and forms the stream of events which needs to be analyzed against the much more static set of geo-fences. Many of the use cases mentioned above require low-latency actions to take place if a device enters or leaves a geo-fence, or when it is approaching such a geo-fence. That's where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play.
This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exciting features available out-of-the-box and then shows how easy it is to extend them with user-defined functions (UDFs). The design of such a solution so that it can scale with both an increasing number of position events and geo-fences will be discussed as well.
Location Analytics Real-Time Geofencing using Kafka – Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated (as in a radius around a point location) or it can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries).
Geofencing lays the foundation for realizing use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others.
GPS tracking tells constantly and in real time where a device is located and forms the stream of events which needs to be analyzed against the much more static set of geo-fences. Many of the use cases mentioned above require low-latency actions to take place if a device enters or leaves a geo-fence, or when it is approaching such a geo-fence. That's where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play.
This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exciting features available out-of-the-box and then shows how easy it is to extend them with user-defined functions (UDFs). The design of such a solution so that it can scale with both an increasing number of position events and geo-fences will be discussed as well.
Fundamentals Big Data and AI Architecture – Guido Schmutz
The right architecture is key for any IT project. This is especially the case for big data projects, where there are no standard architectures which have proven their suitability over years. This session discusses the different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Streaming Analytics architecture as well as Lambda and Kappa architecture and presents the mapping of components from both Open Source as well as the Oracle stack onto these architectures.
The right architecture is key for any IT project. This also holds for big data projects, but on the other hand there are not yet many standard architectures which have proven their suitability over the years.
This session discusses different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Event Driven architecture as well as Lambda and Kappa architecture.
Each architecture is presented in a vendor- and technology-independent way using a standard architecture blueprint. In a second step, these architecture blueprints are used to show how a given architecture can support certain use cases and which popular open source technologies can help to implement a solution based on a given architecture.
Location Analytics - Real Time Geofencing using Apache Kafka – Guido Schmutz
An important underlying concept behind location-based applications is called geofencing. Geofencing is a process that allows acting on users and/or devices who enter/exit a specific geographical area, known as a geo-fence. A geo-fence can be dynamically generated (as in a radius around a point location) or it can be a predefined set of boundaries (such as secured areas, buildings, borders of counties, states or countries).
Geofencing lays the foundation for realizing use cases around fleet monitoring, asset tracking, phone tracking across cell sites, connected manufacturing, ride-sharing solutions and many others.
GPS tracking tells constantly and in real time where a device is located and forms the stream of events which needs to be analyzed against the much more static set of geo-fences. Many of the use cases mentioned above require low-latency actions to take place if a device enters or leaves a geo-fence, or when it is approaching such a geo-fence. That's where streaming data ingestion and streaming analytics, and therefore the Kafka ecosystem, come into play.
This session will present how location analytics applications can be implemented using Kafka and KSQL & Kafka Streams. It highlights the exciting features available out-of-the-box and then shows how easy it is to extend them with user-defined functions (UDFs). The design of such a solution so that it can scale with both an increasing number of position events and geo-fences will be discussed as well.
Building Event-Driven (Micro) Services with Apache Kafka – Guido Schmutz
This talk begins with a short recap of how we created systems over the past 20 years, up to the current idea of building systems using a Microservices architecture. What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to integrate services with each other in a Microservices Architecture? Or is it better to use a more loosely-coupled protocol? Answers to these and many other questions are provided. The talk will show how a distributed log (event hub) can help to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and answers when to use which. It highlights how modern stream processing systems can be used to hold state both internally as well as in a database, and how this state can be used to further increase independence from other services, the primary goal of a Microservices architecture.
Stream Processing – Concepts and Frameworks – Guido Schmutz
More and more data sources today provide a constant stream of data, from IoT devices to Social Media streams. It is one thing to collect these events at the velocity they arrive, without losing a single message. An Event Hub and a data flow engine can help here. It's another thing to do some (complex) analytics on the data. There is always the option to first store the data in a data sink of choice and analyze it later. Storing even a high-volume event stream is feasible and not a challenge anymore. But this adds to the end-to-end latency and it takes minutes if not hours to present results. If you need to react fast, you simply can't afford to first store the data. You need to process it directly on the data stream. This is called Stream Processing or Stream Analytics. In this talk I will present the important concepts a Stream Processing solution should support and then dive into some of the most popular frameworks available on the market and how they compare.
Kafka as an Event Store - is it Good Enough? – Guido Schmutz
Event Sourcing and CQRS are two popular patterns for implementing a Microservices architecture. With Event Sourcing we do not store the state of an object, but instead store all the events impacting its state. To retrieve an object's state, we then have to read the different events related to that object and apply them one by one. CQRS (Command Query Responsibility Segregation) on the other hand is a way to dissociate writes (Command) and reads (Query). Event Sourcing and CQRS are frequently grouped and used together to form something bigger. While it is possible to implement CQRS without Event Sourcing, the opposite is not necessarily correct. In order to implement Event Sourcing, an efficient Event Store is needed. But is that also true when combining Event Sourcing and CQRS? And what is an event store in the first place, and what features should it implement? This presentation will first discuss what functionalities an event store should offer and then present how Apache Kafka can be used to implement an event store. But is Kafka good enough, or do specific event store solutions such as AxonDB or Event Store provide a better solution?
3. BASEL | BERN | BRUGG | BUKAREST | DÜSSELDORF | FRANKFURT A.M. | FREIBURG I.BR. | GENF
HAMBURG | KOPENHAGEN | LAUSANNE | MANNHEIM | MÜNCHEN | STUTTGART | WIEN | ZÜRICH
Guido Schmutz
Working at Trivadis for more than 22 years
Consultant, Trainer, Platform Architect for Java, Oracle, SOA and Big Data / Fast Data
Oracle Groundbreaker Ambassador & Oracle ACE Director
@gschmutz guidoschmutz.wordpress.com
171st edition
6. Microservices / Modern Applications
• Highly decoupled
• Independently deployable
• Bounded Context/Aggregate (DDD)
• Responsible for their data
• Favour asynchronous, event-driven interaction over synchronous
• Smart Endpoints and Dumb Pipes
• Use Anti-Corruption Layer (ACL) if no fit!
[Diagram: microservices M1, M2 and M3 communicating through an Event Hub, with an ACL where needed.]
7. Microservices / Modern Applications – Integrate with Traditional System
• Highly decoupled
• Independently deployable
• Bounded Context/Aggregate (DDD)
• Responsible for their data
• Favour asynchronous, event-driven interaction over synchronous
• Smart Endpoints and Dumb Pipes
• Use Anti-Corruption Layer (ACL) if no fit!
[Diagram: microservices M1, M2 and M3 and a Traditional App integrated through the Event Hub, with ACLs between the Event Hub and the services/traditional app.]
8. Use Case
[Diagram: an Order Processing System (traditional app with Order API, Order Logic and Order DB) and a Customer Microservice (Customer API, Customer Logic, Customer DB) expose REST interfaces and exchange events over the Event Hub (Order topic and compacted Customer topic); a Notification Microservice (Notification Logic) consumes them, and a Customer materialized view is kept up to date from the Customer topic.]
Legend used on the following blueprint slides: Flat / Aggregate (DB model), Low Latency / High Latency, DB / Dataflow / Message latency, ACL, Open Source / Commercial license, "Modern Apps" / Traditional Apps (Legacy).
9. Properties – Message
[Diagram: the same data from Table A (columns A1–A3) and Table B (columns B1–B4) can be published either as flat messages that mirror the individual tables or as one aggregate message that nests the B records under the A record (DB model: Flat vs. Aggregate).]
10. Properties – Latency
[Diagram: latency is introduced both between the traditional system's RDBMS and the data flow, and between the data flow and the Event Hub.]
11. Properties – Anti-Corruption Layer (ACL)
[Diagram: the ACL between the traditional system's RDBMS and the Event Hub can be implemented either inside the database or inside the data flow.]
Examples of a database-side ACL: view layer, stored procedures, JSON support in the DB, …
Examples of a dataflow-side ACL: StreamSets, Kafka Connect, Kafka Streams / KSQL, …
13. Blueprints Oracle RDBMS => Apache Kafka (DB-K)
DB-K_1: Polling of RDBMS table/view
DB-K_2: Change Data Capture (CDC) on RDBMS
DB-K_3: Polling of RDBMS API
DB-K_4: Produce to Event Hub from RDBMS
DB-K_5: RDBMS Queue with bridge to Event Hub
[Diagram: the five DB-K blueprints drawn onto the use case with the Order Processing System, the Customer and Notification Microservices, the Event Hub (Order and compacted Customer topics) and the Schema Registry.]
14. DB-K_1: Polling of RDBMS table/view
[Blueprint diagram: a stream data integration component (data flow) polls the application's RDBMS and publishes the changes to the Event Hub for stream analytics.]
15. DB-K_1: Polling of RDBMS table/view – Kafka Connect with JDBC Source Connector
[Blueprint diagram: Kafka Connect with the JDBC Source Connector polls the RDBMS table/view and produces the rows to the Event Hub.]
16. Kafka Connect & JDBC Connector
• Many connectors available
• Single Message Transforms (SMT)
• Declarative style, simple data flows
• Framework is part of Apache Kafka
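As a hedged sketch of how blueprint DB-K_1 could be wired up, here it is declared through ksqlDB's CREATE SOURCE CONNECTOR syntax (an equivalent JSON config can be posted to the Kafka Connect REST API); connection URL, credentials, table and topic names are illustrative, not taken from the talk.

CREATE SOURCE CONNECTOR order_jdbc_source WITH (
  'connector.class'          = 'io.confluent.connect.jdbc.JdbcSourceConnector',
  'connection.url'           = 'jdbc:oracle:thin:@//oracle-host:1521/orderdb',
  'connection.user'          = 'order_app',
  'connection.password'      = 'secret',
  -- capture new and changed rows based on a timestamp plus an incrementing id column
  'mode'                     = 'timestamp+incrementing',
  'timestamp.column.name'    = 'MODIFIED_AT',
  'incrementing.column.name' = 'ID',
  'table.whitelist'          = 'ORDER_T,CUSTOMER_T',
  'topic.prefix'             = 'oracle-'
);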
18. DB-K_2: Change Data Capture (CDC) on RDBMS
[Blueprint diagram: a CDC component reads the RDBMS redo log and pushes the captured changes through the data flow to the Event Hub.]
19. DB-K_2: Change Data Capture (CDC) on RDBMS – Using Oracle GoldenGate
[Blueprint diagram: Oracle GoldenGate captures changes from the redo log and delivers them to the Event Hub; the REST Proxy is shown as a possible entry point into the Event Hub.]
Alternatives: StreamSets Data Collector, Attunity, Debezium, …
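As a sketch of the Debezium alternative named above, a log-based CDC connector could be declared roughly like this. Property names vary between Debezium versions; host, credentials, database and table names are illustrative, and the Debezium Oracle connector additionally needs LogMiner or XStream to be set up on the database side.

CREATE SOURCE CONNECTOR order_cdc WITH (
  'connector.class'      = 'io.debezium.connector.oracle.OracleConnector',
  'database.hostname'    = 'oracle-host',
  'database.port'        = '1521',
  'database.user'        = 'c##dbzuser',
  'database.password'    = 'secret',
  'database.dbname'      = 'ORCLCDB',
  'database.server.name' = 'oracle',
  'table.include.list'   = 'ORDERPROC.ORDER_T,ORDERPROC.CUSTOMER_T',
  'database.history.kafka.bootstrap.servers' = 'kafka:9092',
  'database.history.kafka.topic'             = 'schema-changes.orderproc'
);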
20. DB-K_3: Polling of RDBMS API
[Blueprint diagram: instead of reading tables, the data flow polls an API exposed by the application/RDBMS and publishes the results to the Event Hub.]
21. DB-K_3: Polling of RDBMS API – StreamSets invokes Oracle REST Data Services
[Blueprint diagram: StreamSets Data Collector polls an ORDS REST endpoint and produces the retrieved records to the Event Hub.]
22. Oracle REST Data Services (ORDS)
• Makes it easy to develop modern REST interfaces for relational data in the Oracle Database and the Oracle Database 18c JSON Document Store
• ORDS maps HTTP(S) verbs (GET, POST, PUT, DELETE, etc.) to database transactions and returns any results formatted using JSON
• Java middle-tier application on WebLogic, Tomcat, Docker, or standalone (for development)
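The extract jumps from slide 22 to slide 24, so the first setup step is not shown here. As a hedged sketch, enabling the schema and defining the module and template that the handler on the next slide refers to could look roughly like this; the schema name and URL mapping are illustrative assumptions.

BEGIN
  -- expose the schema through ORDS under an illustrative base path
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'ORDERPROC',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'orderproc',
    p_auto_rest_auth      => FALSE);

  -- module and template matching the DEFINE_HANDLER call on the next slide
  ORDS.DEFINE_MODULE(
    p_module_name    => 'order_processing',
    p_base_path      => '/order_processing/',
    p_items_per_page => 25);

  ORDS.DEFINE_TEMPLATE(
    p_module_name => 'order_processing',
    p_pattern     => 'changes/:offset');

  COMMIT;
END;
/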
24. DB-K_3 – Setup ORDS (II)
ORDS.DEFINE_HANDLER(
  p_module_name    => 'order_processing',
  p_pattern        => 'changes/:offset',
  p_method         => 'GET',
  p_source_type    => 'resource/lob',
  p_items_per_page => 25,
  p_source         =>
    'SELECT ''application/json'',
            json_object(''orderId'' VALUE po.id,
                        ''orderDate'' VALUE po.order_date,
                        ''orderMode'' VALUE po.order_mode,
                        ''customer'' VALUE
                          json_object(''firstName'' VALUE cu.first_name,
                                      ''lastName'' VALUE cu.last_name,
                                      ''emailAddress'' VALUE cu.email),
                        ''lineItems'' VALUE (SELECT json_arrayagg(
                                               json_object(''ItemNumber'' VALUE li.id,
                                                           ''Product'' VALUE
                                                             json_object(''id'' VALUE li.product_id,
                                                                         ''name'' VALUE li.product_name,
                                                                         ''unitPrice'' VALUE li.unit_price),
                                                           ''quantity'' VALUE li.quantity))
                                             FROM order_item_t li WHERE po.id = li.order_id),
                        ''offset'' VALUE TO_CHAR(po.modified_at, ''YYYYMMDDHH24MISS''))
     FROM order_t po LEFT JOIN customer_t cu ON (po.customer_id = cu.id)
     WHERE po.modified_at > TO_DATE(:offset, ''YYYYMMDDHH24MISS'')');
25. StreamSets Data Collector
• GUI-based, drag-and-drop Data Flow Pipelines
• Both stream and batch processing
• Custom sources, sinks and processors
• Monitoring and Error Detection
26. DB-K_4: Produce to Event Hub from RDBMS
[Blueprint diagram: the RDBMS/application itself produces events to the Event Hub, either natively or via a "REST to Event Hub" bridge, without a separate data flow component.]
27. DB-K_4: Produce to Event Hub from RDBMS – Native Kafka Producer using Java in DB
[Blueprint diagram: a Kafka producer written in Java and loaded into the database publishes directly to the Event Hub.]
Does not feel right!
28. DB-K_4: Produce to Event Hub from RDBMS – Invoke REST Proxy from PL/SQL
[Blueprint diagram: PL/SQL calls the Kafka REST Proxy, which publishes the events to the Event Hub.]
Invoking a REST service from the database is not well supported.
29. DB-K_4: Produce to Event Hub from RDBMS – Oracle Big Data SQL integrates with Kafka
[Blueprint diagram: Oracle Big Data SQL pushes data from the RDBMS to the Event Hub.]
Coming soon …
30. DB-K_5: RDBMS Queue with bridge to Event Hub
[Blueprint diagram: the application enqueues events into a queue inside the RDBMS; a data flow component bridges the queue to the Event Hub.]
31. DB-K_5: RDBMS Queue with bridge to Event Hub – Oracle Advanced Queuing & Kafka Connect JMS
[Blueprint diagram: the application enqueues into Oracle Advanced Queuing (AQ); the Kafka Connect JMS connector bridges the AQ queue to the Event Hub.]
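On the database side of this blueprint, enqueuing an event into AQ from PL/SQL could look roughly like this. The queue name and JSON payload are illustrative; the assumption is that the queue was created with DBMS_AQADM using the SYS.AQ$_JMS_TEXT_MESSAGE payload type so that a JMS-based connector can read it.

DECLARE
  l_enqueue_options  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_message_props    DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_message_id       RAW(16);
  l_message          SYS.AQ$_JMS_TEXT_MESSAGE;
BEGIN
  -- build a JMS text message carrying the event as JSON
  l_message := SYS.AQ$_JMS_TEXT_MESSAGE.construct;
  l_message.set_text('{"orderId": 4711, "status": "NEW"}');

  DBMS_AQ.ENQUEUE(
    queue_name         => 'order_event_aq',
    enqueue_options    => l_enqueue_options,
    message_properties => l_message_props,
    payload            => l_message,
    msgid              => l_message_id);
  COMMIT;
END;
/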
34. DB-K_5: RDBMS Queue with bridge to Event Hub – Oracle AQ with Kafka API & MirrorMaker
[Blueprint diagram: the AQ queue is exposed through a Kafka-compatible API and mirrored into the Event Hub using MirrorMaker.]
Oracle works on a Kafka API for Advanced Queuing.
36. Blueprints Apache Kafka => Oracle RDBMS (K-DB)
K-DB_1: Write to RDBMS table/view
K-DB_2: Write over RDBMS API
K-DB_3: Consume from Event Hub
K-DB_4: Event Hub with bridge to RDBMS Queue
[Diagram: the four K-DB blueprints drawn onto the use case with the Event Hub (Order and compacted Customer topics), the Schema Registry, the Order Processing System and the Customer and Notification Microservices.]
37. K-DB_1: Write to RDBMS table/view
[Blueprint diagram: a data flow component consumes from the Event Hub and writes the records into a table or view of the application's RDBMS.]
38. K-DB_1: Write to RDBMS table/view – Kafka Connect and JDBC Sink Connector
[Blueprint diagram: Kafka Connect with the JDBC Sink Connector consumes from the Event Hub and writes the records into the RDBMS table/view.]
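A hedged sketch of blueprint K-DB_1, again declared through ksqlDB's connector syntax; connection details, topic and key column are illustrative assumptions.

CREATE SINK CONNECTOR customer_jdbc_sink WITH (
  'connector.class'     = 'io.confluent.connect.jdbc.JdbcSinkConnector',
  'connection.url'      = 'jdbc:oracle:thin:@//oracle-host:1521/orderdb',
  'connection.user'     = 'order_app',
  'connection.password' = 'secret',
  'topics'              = 'customer',
  -- upsert keyed on the record key so re-deliveries do not create duplicates
  'insert.mode'         = 'upsert',
  'pk.mode'             = 'record_key',
  'pk.fields'           = 'ID',
  'auto.create'         = 'true'
);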
39. K-DB_2: Write over RDBMS API
[Blueprint diagram: a data flow component consumes from the Event Hub and calls an API exposed by the application/RDBMS instead of writing to the tables directly.]
40. K-DB_2: Write over RDBMS API – Kafka Connect invokes Oracle REST Data Services
[Blueprint diagram: Kafka Connect consumes from the Event Hub and invokes an ORDS REST endpoint, which performs the database transaction.]
42. K-DB_3: Consume from Event Hub
[Blueprint diagram: the RDBMS/application itself consumes from the Event Hub, without a separate data flow component.]
43. K-DB_3: Consume from Event Hub – Oracle Big Data SQL exposes topic as table
[Blueprint diagram: Oracle Big Data SQL makes a Kafka topic queryable as a table from within the Oracle RDBMS.]
Coming soon …
44. K-DB_4: Event Hub with bridge to RDBMS queue
[Blueprint diagram: a data flow component bridges the Event Hub to a queue inside the RDBMS; the application dequeues the events from there.]
45. K-DB_4: Event Hub with bridge to RDBMS queue – Oracle Advanced Queuing & Kafka Connect JMS
[Blueprint diagram: the Kafka Connect JMS connector consumes from the Event Hub and enqueues the messages into an Oracle AQ queue, from which the application dequeues.]
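On the receiving end of this blueprint, the legacy application could dequeue the bridged messages from AQ with PL/SQL roughly like this; the queue name is illustrative, and what to do with the payload is up to the legacy application.

DECLARE
  l_dequeue_options  DBMS_AQ.DEQUEUE_OPTIONS_T;
  l_message_props    DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_message_id       RAW(16);
  l_message          SYS.AQ$_JMS_TEXT_MESSAGE;
  l_payload          VARCHAR2(4000);
BEGIN
  l_dequeue_options.wait := DBMS_AQ.NO_WAIT;

  DBMS_AQ.DEQUEUE(
    queue_name         => 'kafka_event_aq',
    dequeue_options    => l_dequeue_options,
    message_properties => l_message_props,
    payload            => l_message,
    msgid              => l_message_id);

  -- extract the JSON text and trigger the action in the legacy application
  l_message.get_text(l_payload);
  COMMIT;
END;
/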
46. K-DB_4: Event Hub with bridge to RDBMS queue – Oracle AQ with Kafka API & MirrorMaker
[Blueprint diagram: MirrorMaker replicates the Event Hub topic into Oracle AQ via its Kafka-compatible API; the application dequeues from AQ.]
Oracle works on a Kafka API for Advanced Queuing.
48. Summary
DB-K_1: Polling of RDBMS table/view
DB-K_2: Change Data Capture (CDC) on RDBMS
DB-K_3: Polling of RDBMS API
DB-K_4: Produce to Event Hub from RDBMS
DB-K_5: RDBMS Queue with bridge to Event Hub
K-DB_1: Write to RDBMS table/view
K-DB_2: Write over RDBMS API
K-DB_3: Consume from Event Hub
K-DB_4: Event Hub with bridge to RDBMS Queue
[Diagram: all nine blueprints drawn onto the use case with the Event Hub, the Schema Registry, the Order Processing System and the Customer and Notification Microservices.]
https://github.com/gschmutz/various-demos/tree/master/bidirectional-integration-oracle-kafka
49. Reference Architecture – Modern Data Platform
[Diagram: bulk and event sources (location data, DB extracts, files, weather DB, IoT data, mobile apps, social) are ingested via file/SQL import, data flow and event streams into the Event Hub and the Big Data platform (raw and refined/usage-optimized storage, parallel processing, query engine); a stream processing cluster (stream processor, model/state, rules engine), a microservice cluster and RDBMS-backed "SQL"/search services serve consumers such as BI apps, data science workbenches, enterprise apps and the enterprise data warehouse (SQL export); edge nodes with rules, event hub and storage sit in front, and governance with a data catalog spans the platform.]
50. Reference Architecture – Modern Data Platform
[Diagram: repetition of the reference architecture from the previous slide.]