Confluent hosted a technical thought leadership session to discuss how leading organisations are moving to real-time architectures to support business growth and enhance customer experience.
Kafka Streams State Stores Being Persistent - confluent
This document discusses Kafka Streams state stores. It provides examples of using different types of windowing (tumbling, hopping, sliding, session) with state stores. It also covers configuring state store logging, caching, and retention policies. The document demonstrates how to define windowed state stores in Kafka Streams applications and discusses concepts like grace periods.
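As a rough illustration of these concepts, the following minimal Kafka Streams sketch (topic names, window sizes and durations are illustrative assumptions, not taken from the slides) counts records per key in 5-minute tumbling windows with a 30-second grace period, materialized in a named, persistent window store with explicit retention, caching and changelog logging:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

public class WindowedStateStoreSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-state-store-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();

        // Count events per key in 5-minute tumbling windows, allowing 30 seconds of grace
        // for late-arriving records. The counts live in a named, persistent (RocksDB-backed)
        // window store with explicit retention, caching and a changelog topic.
        builder.stream("page-views", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofSeconds(30)))
               .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("page-view-counts")
                       .withRetention(Duration.ofHours(1))           // how long closed windows stay queryable
                       .withCachingEnabled()                          // buffer updates before forwarding downstream
                       .withLoggingEnabled(Collections.emptyMap()));  // back the store with a changelog topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Hopping windows use the same TimeWindows builder with advanceBy, while sliding and session windows swap in SlidingWindows and SessionWindows.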
A guide through the Azure Messaging services - Update Conference - Eldert Grootenboer
https://www.updateconference.net/en/2019/session/a-guide-through-the-azure-messaging-services
A guide through the Azure Messaging services - Update Conference
Building event-driven Microservices with Kafka Ecosystem - Guido Schmutz
This session will begin with a short recap of how we created systems over the past 20 years, up to the current idea of building systems using a Microservices architecture. What is a Microservices architecture and how does it differ from a Service-Oriented Architecture? Should you use traditional REST APIs to integrate services with each other in a Microservices architecture? Or is it better to use a more loosely-coupled protocol? Answers to these and many other questions are provided. The talk will show how a distributed log (event hub) can help to create a central, persistent history of events and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk shows the difference between request-driven and event-driven communication and answers when to use which. It highlights how a modern stream processing system can be used to hold state both internally as well as in a database, and how this state can be used to further increase independence from other services, the primary goal of a Microservices architecture.
CCT is a web app developed to give the project manager an overview of the transfers made by his team. It was built entirely in Python, HTML, and CSS, with Flask used to connect to the server.
CCT makes it possible to add new transfers, show charts of cost types and values, produce a downloadable PDF document, automatically calculate the sum of the costs incurred, and search for new users on GitHub to cover missing skills.
Top 5 Event Streaming Use Cases for 2021 with Apache Kafka - Kai Wähner
Apache Kafka and Event Streaming are two of the most relevant buzzwords in tech these days. Ever wonder what the predicted TOP 5 Event Streaming Architectures and Use Cases for 2021 are? Check out the following presentation. Learn about edge deployments, hybrid and multi-cloud architectures, service mesh-based microservices, streaming machine learning, and cybersecurity.
On-demand video recording: https://videos.confluent.io/watch/XAjxV3j8hzwCcEKoZVErUJ
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben... - confluent
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich data flow of both client and backend data has been extended to serve a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, allowing backend developers easier access to shared client-side data. We perform this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC-first architecture.
Things we’ll discuss about decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of the processes that allow non-programmers to write and deploy event-driven data flows.
– An end-to-end look at dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and a broadcasted-state pattern.
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online (and why this is not necessarily a “good” idea).
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark and friends to encourage the best design patterns for developers coming from traditional service-oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems.
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams (see the sketch after this list), as well as the complementary tools in the Apache ecosystem.
– The simplicity and power of streaming SQL with microservices.
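For the shared-state bullet above, the sketch below shows one common way to read a RocksDB-backed Kafka Streams state store from elsewhere in the same service via interactive queries; this is a general pattern, not Tinder's actual code, and the store name "user-profile-counts" and value type are hypothetical:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class SharedStateLookup {
    private final KafkaStreams streams;

    public SharedStateLookup(KafkaStreams streams) {
        // A running topology that materialized a store named "user-profile-counts".
        this.streams = streams;
    }

    // Read the latest value for a key straight from the local RocksDB-backed store,
    // instead of calling another service over RPC.
    public Long lookup(String userId) {
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
                StoreQueryParameters.fromNameAndType("user-profile-counts",
                        QueryableStoreTypes.keyValueStore()));
        return store.get(userId); // null if absent (or hosted by another instance)
    }
}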
Elastically Scaling Kafka Using Confluent - confluent
This document discusses how Confluent Platform provides elastic scaling for Apache Kafka. It offers fully managed cloud services through Confluent Cloud or self-managed software. Confluent Cloud allows users to easily scale Kafka workloads from 0 MBps to GBps without complex provisioning. It also offers pay-for-use pricing where customers only pay for the data streamed, with the ability to scale to zero. For self-managed deployments, Confluent Platform enables dynamic scaling of Kafka clusters on Kubernetes through features like tiered storage and self-balancing clusters that can rebalance partitions in seconds versus hours for other Kafka services.
Bridge to Cloud: Using Apache Kafka to Migrate to AWS - confluent
Watch this talk here: https://www.confluent.io/online-talks/bridge-to-cloud-apache-kafka-migrate-aws
Speakers: Priya Shivakumar, Director of Product, Confluent + Konstantine Karantasis, Software Engineer, Confluent + Rohit Pujari, Partner Solutions Architect, AWS
Most companies start their cloud journey with a new use case or a new application. Sometimes these applications can run independently in the cloud, but often they need data from the on-premises datacenter. Existing applications will slowly migrate, but they will need a strategy and the technology to enable a multi-year migration.
In this session, we will share how companies around the world are using Confluent Cloud, a fully managed Apache Kafka service, to migrate to AWS. By implementing a central-pipeline architecture using Apache Kafka to sync on-prem and cloud deployments, companies can accelerate migration times and reduce costs.
In this online talk we will cover:
•How to take the first step in migrating to AWS
•How to reliably sync your on-premises applications using a persistent bridge to the cloud
•Learn how Confluent Cloud can make this daunting task simple, reliable and performant
•See a demo of the hybrid-cloud and multi-region deployment of Apache Kafka
Should you use traditional REST APIs to bind services together? Or is it better to use a richer, more loosely-coupled protocol? This talk will dig into how we piece services together in event-driven systems, how we use a distributed log (event hub) to create a central, persistent history of events, and what benefits we achieve from doing so. Apache Kafka is a perfect match for building such an asynchronous, loosely-coupled event-driven backbone. Events trigger processing logic, which can be implemented in a more traditional as well as in a stream processing fashion. The talk will show the difference between request-driven and event-driven communication and when to use which. It highlights how modern stream processing systems can be used to hold state both internally as well as in a database, and how this state can be used to further increase independence from other services, the primary goal of a Microservices architecture.
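To make the request-driven versus event-driven contrast concrete, here is a minimal, hypothetical sketch of the event-driven side: instead of synchronously calling downstream services over REST, the producing service appends a fact to a Kafka topic and lets any number of consumers react to it independently (topic name and payload are illustrative):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderPlacedPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Request-driven: the order service would call shipping and billing synchronously and wait.
            // Event-driven: it records the fact once; shipping, billing and analytics consume it on their own schedule.
            producer.send(new ProducerRecord<>("orders", "order-4711",
                    "{\"event\":\"OrderPlaced\",\"orderId\":\"4711\",\"amount\":99.90}"));
        }
    }
}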
Serverless Kafka on AWS as Part of a Cloud-native Data Lake Architecture - Kai Wähner
AWS Data Lake / Lake House + Confluent Cloud for Serverless Apache Kafka. Learn about use cases, architectures, and features.
Data must be continuously collected, processed, and reactively used in applications across the entire enterprise - some in real time, some in batch mode. In other words: As an enterprise becomes increasingly software-defined, it needs a data platform designed primarily for "data in motion" rather than "data at rest."
Apache Kafka is now mainstream when it comes to data in motion! The Kafka API has become the de facto standard for event-driven architectures and event streaming. Unfortunately, running it yourself very often becomes too expensive when you add factors like scaling, administration, support, security, creating connectors...and everything else that goes with it. Resources in enterprises are scarce: this applies to both the best team members and the budget.
The cloud - as we all know - offers the perfect solution to such challenges.
Most likely, fully-managed cloud services such as AWS S3, DynamoDB or Redshift are already in use. Now it is time to implement "fully-managed" for Kafka as well - with Confluent Cloud on AWS.
Building a central integration layer that doesn't care where or how much data is coming from.
Implementing scalable data stream processing to gain real-time insights
Leveraging fully managed connectors (like S3, Redshift, Kinesis, MongoDB Atlas & more) to quickly access data
Confluent Cloud in action? Let's show how ao.com made it happen!
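From the application's point of view, "fully managed Kafka" mostly changes the connection configuration. A minimal sketch, assuming a Confluent Cloud cluster on AWS reachable via SASL_SSL with an API key and secret (all endpoint and credential values below are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConfluentCloudProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoint and credentials for a Confluent Cloud cluster running on AWS.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "pkc-xxxxx.eu-central-1.aws.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // The rest of the client code is identical to self-managed Kafka: brokers, storage and
        // scaling are handled by the managed service.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "hello from the cloud"));
        }
    }
}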
Supply Chain Optimization with Apache Kafka - Kai Wähner
Supply Chain optimization leveraging Event Streaming with Apache Kafka. See real-world use cases and architectures from Walmart, BMW, Porsche, and other enterprises to improve the Supply Chain Management (SCM) processes. Automation, robustness, flexibility, real-time, decoupling, data integration, and hybrid deployments...
Video recording: https://youtu.be/dUkgungBmPs
Blog post: https://www.kai-waehner.de/apache-kafka-supply-chain-management-scm-optimization-scor-six-sigma-real-time
APAC Confluent Consumer Data Right the Lowdown and the Lessons - confluent
The document discusses the Consumer Data Right (CDR) framework in Australia and lessons that can be learned from it. It provides an overview of the CDR, including that it applies to existing consumer data and requires data holders to share data with accredited third parties if authorized by consumers. It also notes the CDR will apply across multiple sectors, starting with banking, energy, and telecommunications. The document also discusses some of the technical challenges of implementing CDR, like maintaining a single customer view, tracking accredited parties, and ensuring data privacy and governance. It provides examples of how streaming data platforms like Apache Kafka can be used to power use cases enabled by CDR, like customer and product 360-degree views, payments traceability, and open banking.
More info: https://cnfl.io/cloud-native-experience-for-kafka-in-cloud | Neha Narkhede is co-founder and CTO at Confluent, a company backing the popular Apache Kafka messaging system. Prior to founding Confluent, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn’s streaming infrastructure built on top of Apache Kafka and Apache Samza. She is one of the initial authors of Apache Kafka and a committer and PMC member on the project.
Concepts and Patterns for Streaming Services with Kafka - QAware GmbH
Cloud Native Night March 2020, Mainz: Talk by Perry Krol (@perkrol, Confluent)
Abstract: Proven approaches such as service-oriented and event-driven architectures are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but they provide a more holistic and compelling approach when applied together. In this session Confluent will provide insights into how service-based architectures and stream processing tools such as Apache Kafka® can help you build business-critical systems. You will learn why streaming beats request-response based architectures in complex, contemporary use cases, and why replayable logs such as Kafka provide a backbone for both service communication and shared datasets.
Based on these principles, we will explore how event collaboration and event sourcing patterns increase safety and recoverability with functional, event-driven approaches, how to apply patterns including Event Sourcing and CQRS, and how to build multi-team systems with microservices and SOA using patterns such as “inside out databases” and “event streams as a source of truth”.
IIoT with Kafka and Machine Learning for Supply Chain Optimization In Real Ti... - Kai Wähner
I did a webinar with Confluent's partner Expero about "Apache Kafka and Machine Learning for Real Time Supply Chain Optimization". This is a great example for anybody in the automation industry / Industrial IoT (IIoT), like automotive, manufacturing, logistics, etc.
We explain how a real-time event streaming platform can integrate with the legacy world and proprietary IIoT protocols (like Siemens S7, Modbus, Beckhoff ADS, OPC-UA, et al). You can process the data at scale and then ingest it into a modern data store (like AWS S3, Snowflake or MongoDB) or an analytics / machine learning framework (like TensorFlow, PyTorch or Azure Machine Learning Service).
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Fi... - confluent
Watch this talk here: https://www.confluent.io/online-talks/using-apache-kafka-to-optimize-real-time-analytics-financial-services-iot-applications
When it comes to the fast-paced nature of capital markets and IoT, the ability to analyze data in real time is critical to gaining an edge. It’s not just about the quantity of data you can analyze at once, it’s about the speed, scale, and quality of the data you have at your fingertips.
Modern streaming data technologies like Apache Kafka and the broader Confluent platform can help detect opportunities and threats in real time. They can improve profitability, yield, and performance. Combining Kafka with Panopticon visual analytics provides a powerful foundation for optimizing your operations.
Use cases in capital markets include transaction cost analysis (TCA), risk monitoring, surveillance of trading and trader activity, compliance, and optimizing profitability of electronic trading operations. Use cases in IoT include monitoring manufacturing processes, logistics, and connected vehicle telemetry and geospatial data.
This online talk will include in-depth practical demonstrations of how Confluent and Panopticon together support several key applications. You will learn:
-Why Apache Kafka is widely used to improve performance of complex operational systems
-How Confluent and Panopticon open new opportunities to analyze operational data in real time
-How to quickly identify and react immediately to fast-emerging trends, clusters, and anomalies
-How to scale data ingestion and data processing
-Build new analytics dashboards in minutes
Building Event-Driven Applications with Apache Kafka & Confluent Platform - confluent
Watch this talk here: https://www.confluent.io/online-talks/building-event-driven-applications-apache-kafka-and-confluent-platform
Apache Kafka® has become the de facto technology for real-time event streaming. Confluent Platform, developed by the creators of Apache Kafka, is an event-streaming platform that enables the ingest and processing of massive amounts of data in real time.
In this session, we will cover the easiest ways to start developing event-driven applications with Apache Kafka using Confluent Platform. We will also demo a contextual event-driven application built using our ecosystem of connectors, REST proxy, and a variety of native clients.
View now to learn:
-How to create Apache Kafka topics in minutes and process event streams in real time
-Check the health of an Apache Kafka broker using Confluent Control Center
-The latest enhancements to Confluent Platform that make it easier to run Apache Kafka at scale
-How to use KSQL, streaming SQL for Apache Kafka, to process event streams in real time using simple SQL queries
Kafka as an Event Store (Guido Schmutz, Trivadis) Kafka Summit NYC 2019 - confluent
Event Sourcing and CQRS are two popular patterns for implementing a Microservices architecture. With Event Sourcing we do not store the state of an object, but instead store all the events impacting its state. Then to retrieve an object's state, we have to read the different events related to a certain object and apply them one by one. CQRS (Command Query Responsibility Segregation) on the other hand is a way to dissociate writes (Command) and reads (Query). Event Sourcing and CQRS are frequently grouped and used together to form something bigger. While it is possible to implement CQRS without Event Sourcing, the opposite is not necessarily correct. In order to implement Event Sourcing, an efficient Event Store is needed. But is that also true when combining Event Sourcing and CQRS? And what is an event store in the first place and what features should it implement? This presentation will first discuss what functionalities an event store should offer and then present how Apache Kafka can be used to implement an event store. But is Kafka good enough or do specific event store solutions such as AxonDB or Event Store provide a better solution?
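As a minimal sketch of the Event Sourcing half of that discussion (the account example, topic name and event format are hypothetical, not from the talk), the current state is rebuilt by replaying the events stored in a Kafka topic rather than reading a stored snapshot:

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AccountEventReplay {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "account-replay-" + System.currentTimeMillis());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // replay the full history
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        long balance = 0;
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("account-events")); // events keyed by account ID, never updated in place
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                // Each record is one immutable event, e.g. "DEPOSITED:50" or "WITHDRAWN:20";
                // the current state is the fold over all of them.
                String[] event = record.value().split(":");
                if (event[0].equals("DEPOSITED")) balance += Long.parseLong(event[1]);
                if (event[0].equals("WITHDRAWN")) balance -= Long.parseLong(event[1]);
            }
        }
        System.out.println("Rebuilt balance = " + balance);
    }
}

A real replay would poll in a loop until it reaches the end of the log and would normally filter by aggregate key; the single poll just keeps the sketch short.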
This document discusses how Spring Boot and Kafka can form a new enterprise platform for continuous delivery. It provides examples of companies like Netflix transitioning to using Spring Boot as their core Java framework. The document advocates building applications around event-driven microservices using Spring Boot, Kafka Streams, and a streaming data platform to enable arbitrary scaling, multi-cloud capabilities, and continuous delivery across the enterprise.
1) Sam Vanhoutte discusses using Azure services like IoT Edge, IoT Hub, Stream Analytics, and Azure Databricks for real-time data analytics in IoT from edge to cloud.
2) A traffic camera scenario is presented where IoT Edge is used at the edge for tasks like license plate recognition while the cloud is used for analytics like detecting speeding tickets and suspicious vehicles.
3) Stream Analytics is used both at the edge and in the cloud to process streaming data in real-time while Azure Databricks is used for structured streaming and continuous aggregations using Apache Spark.
Build a Bridge to Cloud with Apache Kafka® for Data Analytics Cloud Services - confluent
Build a Bridge to Cloud with Apache Kafka® for Data Analytics Cloud Services, Perry Krol, Head of Systems Engineering, CEMEA, Confluent
https://www.meetup.com/Frankfurt-Apache-Kafka-Meetup-by-Confluent/events/269751169/
Kai Waehner [Confluent] | Real-Time Streaming Analytics with 100,000 Cars Usi... - InfluxData
Kai Waehner [Confluent] | Real-Time Streaming Analytics with 100,000 Cars Using MQTT, Kafka and InfluxDB 2.0 on Kubernetes | InfluxDays Virtual Experience London 2020
The Heart of the Data Mesh Beats in Real-Time with Apache Kafka - Kai Wähner
If there were a buzzword of the hour, it would certainly be "data mesh"! This new architectural paradigm unlocks analytic data at scale and enables rapid access to an ever-growing number of distributed domain datasets for various usage scenarios.
As such, the data mesh addresses the most common weaknesses of the traditional centralized data lake or data platform architecture. And the heart of a data mesh infrastructure must be real-time, decoupled, reliable, and scalable.
This presentation explores how Apache Kafka, as an open and scalable decentralized real-time platform, can be the basis of a data mesh infrastructure and - complemented by many other data platforms like a data warehouse, data lake, and lakehouse - solve real business problems.
There is no silver bullet or single technology/product/cloud service for implementing a data mesh. The key outcome of a data mesh architecture is the ability to build data products, with the right tool for the job.
A good data mesh combines data streaming technology like Apache Kafka or Confluent Cloud with cloud-native data warehouse and data lake architectures from Snowflake, Databricks, Google BigQuery, et al.
Resilient Real-time Data Streaming across the Edge and Hybrid Cloud with Apac... - Kai Wähner
Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.
Video recording:
https://qconlondon.com/london2022/presentation/resilient-real-time-data-streaming-across-the-edge-and-hybrid-cloud
Digital Business Transformation in the Streaming Era - Attunity
Enterprises are rapidly adopting stream computing backbones, in-memory data stores, change data capture, and other low-latency approaches for end-to-end applications. As businesses modernize their data architectures over the next several years, they will begin to evolve toward all-streaming architectures. In this webcast, Wikibon, Attunity, and MemSQL will discuss how enterprise data professionals should migrate their legacy architectures in this direction. They will provide guidance for migrating data lakes, data warehouses, data governance, and transactional databases to support all-streaming architectures for complex cloud and edge applications. They will discuss how this new architecture will drive enterprise strategies for operationalizing artificial intelligence, mobile computing, the Internet of Things, and cloud-native microservices.
Link to the Wikibon report - wikibon.com/wikibons-2018-big-data-analytics-trends-forecast
Link to Attunity Streaming CDC Book Download - http://www.bit.ly/cdcbook
Link to MemSQL's Free Data Pipeline Book - http://go.memsql.com/oreilly-data-pipelines
IoT and Event Streaming at Scale with Apache Kafka - confluent
This document discusses IoT architectures for Apache Kafka and event streaming. It begins with an overview of use cases for consumer IoT and industrial IoT. It then covers event streaming with Apache Kafka, including its suitability for real-time processing. Several IoT architecture patterns are presented, such as deploying Kafka at the edge or in hybrid edge-cloud environments. A live demo of a connected car infrastructure using Kafka, MQTT and TensorFlow is also proposed. The document concludes by discussing the benefits of using Confluent Platform for Kafka deployments.
IoT Architectures for Apache Kafka and Event Streaming - Industry 4.0, Digita... - Kai Wähner
The Internet of Things (IoT) is getting more and more traction as valuable use cases come to light. Whether you are in Healthcare, Telecommunications, Manufacturing, Banking or Retail, to name a few industries, there is one key challenge: the integration of backend IoT data logs and applications, business services and cloud services to process the data in real time and at scale.
In this talk, we will be sharing how Kafka has become the leading technology used throughout the business to provide real-time event streaming. Explore real-life use cases of Kafka Connect, Kafka Streams and KSQL, independent of the deployment, be it in a private or public cloud, on premises or at the edge.
Audi - Connected car infrastructure
Robert Bosch Power Tools - Track and Trace of devices and people at construction areas
Deutsche Bahn - Customer 360 for train timetable updates
E.ON - IoT Streaming Platform to integrate and build smart home, smart building and smart grid infrastructures
Mit Streaming die Brücken zum Erfolg bauen - confluent
Mit Streaming die Brücken zum Erfolg bauen
Henrik Berner of Mercedes-Benz discusses how the company built an event-driven architecture using Apache Kafka to enable seamless 360-degree data flow. Mercedes-Benz deployed a Kafka platform in 2018 and it now supports over 70 systems across divisions. The platform provides features like connectors, streams, and schema registry. It is used for data lake streaming, 360 customer data replication between on-premises and cloud clusters, and synchronizing changes in near real-time. The event-driven approach reduced ETL processes and complex data formats while enabling permanent data completion from multiple sources.
Most data visualisation solutions today still work on data sources which are stored persistently in a data store, using the so-called "data at rest" paradigms. More and more data sources today provide a constant stream of data, from IoT devices to social media streams. These data streams publish with high velocity, and messages often have to be processed as quickly as possible. For the processing and analytics on the data, so-called stream processing solutions are available. But these provide only minimal or no visualisation capabilities. One way is to first persist the data into a data store and then use a traditional data visualisation solution to present the data.
If latency is not an issue, such a solution might be good enough. Another question is which data store solution is necessary to keep up with the high load on write and read. If it is not an RDBMS but a NoSQL database, then not all traditional visualisation tools might already integrate with the specific data store. Another option is to use a streaming visualisation solution. These are specially built for streaming data and often do not support batch data. A much better solution would be to have one tool capable of handling both batch and streaming data. This talk presents different architecture blueprints for integrating data visualisation into a fast data solution and highlights some of the products available to implement these blueprints.
Apache Kafka® and Analytics in a Connected IoT World - confluent
Apache Kafka® and Analytics in a Connected IoT World, Kai Waehner, Sr. Solutions Engineer Advanced Technology Group, Confluent
https://www.meetup.com/Berlin-Apache-Kafka-Meetup-by-Confluent/events/273166575/
Fast Data – Fast Cars: Wie Apache Kafka die Datenwelt revolutioniert - confluent
For the automotive industry, as for every other sector, the digital transformation is also a digital revolution: new market players, new technologies and ever-growing volumes of data create new opportunities, but also new challenges, and they require not only new IT architectures but also entirely new ways of thinking.
60% of Fortune 500 companies rely on the comprehensive distributed streaming platform Apache Kafka® to implement their data streaming projects, among them AUDI AG.
In this webinar you will learn:
How Kafka serves as the foundation both for data pipelines and for applications that consume and process real-time data streams.
How Kafka Connect and Kafka Streams support business-critical applications
How Audi used Kafka and Confluent to build a Fast Data IoT platform that is revolutionizing the connected car domain
Speakers:
David Schmitz, Principal Architect, Audi Electronics Venture GmbH
Kai Waehner, Technology Evangelist, Confluent
Apache Kafka as Data Hub for Crypto, NFT, Metaverse (Beyond the Buzz!) - Kai Wähner
Decentralized finance with crypto and NFTs is a huge topic these days. It becomes a powerful combination with the coming metaverse platforms across industries. This session explores the relationship between crypto technologies and modern enterprise architecture.
I discuss how data streaming and Apache Kafka help build innovation and scalable real-time applications of a future metaverse. Let's skip the buzz (and NFT bubble) and instead review existing real-world deployments in the crypto and blockchain world powered by Kafka and its ecosystem.
Kappa vs Lambda Architectures and Technology Comparison - Kai Wähner
Real-time data beats slow data. That’s true for almost every use case. Nevertheless, enterprise architects build new infrastructures with the Lambda architecture that includes separate batch and real-time layers.
This video explores why a single real-time pipeline, called Kappa architecture, is the better fit for many enterprise architectures. Real-world examples from companies such as Disney, Shopify, Uber, and Twitter explore the benefits of Kappa but also show how batch processing fits into this discussion positively without the need for a Lambda architecture.
The main focus of the discussion is on Apache Kafka (and its ecosystem) as the de facto standard for event streaming to process data in motion (the key concept of Kappa), but the video also compares various technologies and vendors such as Confluent, Cloudera, IBM Red Hat, Apache Flink, Apache Pulsar, AWS Kinesis, Amazon MSK, Azure Event Hubs, Google Pub Sub, and more.
Video recording of this presentation:
https://youtu.be/j7D29eyysDw
Further reading:
https://www.kai-waehner.de/blog/2021/09/23/real-time-kappa-architecture-mainstream-replacing-batch-lambda/
https://www.kai-waehner.de/blog/2021/04/20/comparison-open-source-apache-kafka-vs-confluent-cloudera-red-hat-amazon-msk-cloud/
https://www.kai-waehner.de/blog/2021/05/09/kafka-api-de-facto-standard-event-streaming-like-amazon-s3-object-storage/
This document outlines an agenda for a webinar on building secure, event-driven microservices with Confluent Cloud on AWS. The agenda includes presentations on building modern streaming analytics with Confluent on AWS, event streaming made easy with Confluent, and a lab on building end-to-end streaming data pipelines with Confluent Cloud. The hosts for the webinar are Ahmed Zamzam from Confluent and Nuno Barreto from AWS.
Build real-time streaming data pipelines to AWS with Confluent - confluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Data Warehouse vs. Data Lake vs. Data Streaming – Friends, Enemies, Frenemies? - Kai Wähner
The concepts and architectures of a data warehouse, a data lake, and data streaming are complementary to solving business problems.
Unfortunately, the underlying technologies are often misunderstood, overused for monolithic and inflexible architectures, and pitched for wrong use cases by vendors. Let’s explore this dilemma in a presentation.
The slides cover technologies such as Apache Kafka, Apache Spark, Confluent, Databricks, Snowflake, Elasticsearch, AWS Redshift, GCP with Google Bigquery, and Azure Synapse.
Best Practices for Streaming IoT Data with MQTT and Apache Kafka - Kai Wähner
Organizations today are looking to stream IoT data to Apache Kafka. However, connecting tens of thousands or even millions of devices over unreliable networks can create some architecture challenges. In this session, we will identify and demo some best practices for implementing a large scale IoT system that can stream MQTT messages to Apache Kafka.
We use HiveMQ as an open source MQTT broker to receive data from IoT devices and ingest the data in real time into an Apache Kafka cluster for preprocessing (using Kafka Streams / KSQL) and model training + inference (using TensorFlow 2.0 and its TensorFlow I/O Kafka plugin).
We leverage additional enterprise components from HiveMQ and Confluent to allow easy operations, scalability and monitoring.
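A hand-rolled bridge along those lines might look like the following sketch (broker addresses, topic names and the use of the Eclipse Paho MQTT client are assumptions for illustration; in production the HiveMQ and Confluent enterprise components mentioned above, or Kafka Connect, would typically do this job):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Subscribe to all device topics on the MQTT broker and forward every message to Kafka,
        // using the MQTT topic (which usually encodes the device ID) as the Kafka record key.
        MqttClient mqtt = new MqttClient("tcp://localhost:1883", "mqtt-kafka-bridge");
        mqtt.connect();
        mqtt.subscribe("sensors/#", (topic, message) ->
                producer.send(new ProducerRecord<>("iot-sensor-data", topic, new String(message.getPayload()))));

        Thread.currentThread().join(); // keep the bridge running
    }
}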
Viele Autos, noch mehr Daten: IoT-Daten-Streaming mit MQTT & Kafka (Kai Waehn... - confluent
This document discusses best practices for streaming IoT data with MQTT and Apache Kafka. It begins with an overview of a use case involving a global automotive company building a connected car infrastructure. An architecture is presented showing how sensor data from cars can be ingested via MQTT into Apache Kafka and then processed using tools like Kafka Streams, TensorFlow, and ElasticSearch for analytics and alerts. A live demo is described that implements this full pipeline. The document concludes with a discussion of best practices around choosing the right tools, separation of concerns, data types, and next steps.
IoT Architectures for a Digital Twin with Apache Kafka, IoT Platforms and Mac... - Kai Wähner
A digital twin is a digital replica of a living or non-living physical entity. This session discusses the benefits and IoT architectures of a Digital Twin in Industrial IoT (IIoT) and its relation to Apache Kafka, IoT frameworks and Machine Learning. Kafka is often used as central event streaming platform to build a scalable and reliable digital twin for real time streaming sensor data. A live demo shows a scalable digital twin infrastructure for condition monitoring and predictive maintenance in real time for a connected car infrastructure leveraging Kafka, MQTT and TensorFlow.
Key Take-Aways:
• Learn about use cases and characteristics of a digital twin in various industries
• Understand how to build a digital twin for every single one of tens of thousands of IoT devices or machines
• See different IoT architectures with Kafka and other IoT technologies and products, including edge, hybrid and global deployments
• Understand the relation to Machine Learning and bring added value to your IoT infrastructure by enabling use cases like predictive maintenance
• Understand how Apache Kafka enables scalable and flexible end-to-end integration processing from IIoT data to various backend applications
• Watch a live demo of an end-to-end integration, real time processing and analytics of thousands of IoT devices
More details:
https://www.kai-waehner.de/blog/2019/11/28/apache-kafka-industrial-iot-iiot-build-an-open-scalable-reliable-digital-twin/
https://www.kai-waehner.de/blog/2020/03/25/architectures-digital-twin-digital-thread-apache-kafka-iot-platforms-machine-learning/
https://youtu.be/Q3eKPEVwNVY
Apache Kafka in the Automotive Industry (Connected Vehicles, Manufacturing 4.... - Kai Wähner
Connect all the things: An intro to event streaming for the automotive industry including connected cars, mobility services, and manufacturing / industrial IoT.
Video recording of this talk: https://www.youtube.com/watch?v=rBfBFrcO-WU
The Fourth Industrial Revolution (also known as Industry 4.0) is the ongoing automation of traditional manufacturing and industrial practices, using modern smart technology. Event Streaming with Apache Kafka plays a massive role in processing massive volumes of data in real-time in a reliable, scalable, and flexible way using integrating with various legacy and modern data sources and sinks.
Other industries—retail, healthcare, government, financial services, energy, and more—also lean into Industry 4.0 technology to take advantage of IoT devices, sensors, smart machines, robotics, and connected data. The variety of these deployments goes from disconnected edge use cases across hybrid architectures to global multi-cloud deployments.
In this presentation, I want to give you an overview of existing use cases for event streaming technology in a connected world across supply chains, industries and customer experiences that come along with these interdisciplinary data intersections:
- The Automotive Industry (and it’s not only Connected Cars)
- Mobility Services across verticals (transportation, logistics, travel industry, retailing, …)
- Smart Cities (including citizen health services, communication infrastructure, …)
Real-world examples include use cases from car makers such as Audi, BMW, Porsche, Tesla, plus many examples from mobility services such as Uber, Lyft, Here Technologies, and more.
Data Streaming with Apache Kafka & MongoDBconfluent
Explore the use-cases and architecture for Apache Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
Blueprint Series: Architecture Patterns for Implementing Serverless Microserv...Matt Stubbs
Richard Freeman talks about how the data science team at JustGiving built KOALA, a fully serverless stack for real-time web analytics capture, stream processing, metrics API, and storage service, supporting live data at scale from over 26M users. He discusses recent advances in serverless computing, and how you can implement traditionally container-based microservice patterns using serverless-based architectures instead. Deploying Serverless in your organisation can dramatically increase the delivery speed, productivity and flexibility of the development team, while reducing the overall running, DevOps and maintenance costs.
Similar to Set Your Data In Motion - CTO Roundtable (20)
Building API data products on top of your real-time data infrastructureconfluent
This talk and live demonstration will examine how Confluent and Gravitee.io integrate to unlock value from streaming data through API products.
You will learn how data owners and API providers can document and secure data products on top of Confluent brokers, including schema validation, topic routing, and message filtering.
You will also see how data and API consumers can discover and subscribe to products in a developer portal, as well as how they can integrate with Confluent topics through protocols like REST, Websockets, Server-sent Events and Webhooks.
Whether you want to monetize your real-time data, enable new integrations with partners, or provide self-service access to topics through various protocols, this webinar is for you!
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente...confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Santander Stream Processing with Apache Flinkconfluent
Flink is becoming the de facto standard for stream processing due to its scalability, performance, fault tolerance, and language flexibility. It supports stream processing, batch processing, and analytics through one unified system. Developers choose Flink for its robust feature set and ability to handle stream processing workloads at large scales efficiently.
Unlocking the Power of IoT: A comprehensive approach to real-time insightsconfluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Workshop híbrido: Stream Processing con Flinkconfluent
Stream processing is a prerequisite of the data streaming stack, powering real-time applications and pipelines.
It enables greater data portability, optimised resource utilisation, and a better customer experience by processing data streams in real time.
In our hands-on hybrid workshop, you will learn how to easily filter, join, and enrich real-time data within Confluent Cloud using our serverless Flink service.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark...confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
Event-driven architecture (EDA) will be the heart of MAPFRE's ecosystem. To stay competitive, today's companies depend more and more on real-time data analysis, which gives them faster insights and response times. Doing business with real-time data means building situational awareness: detecting and responding to what is happening in the world right now.
Eventos y Microservicios - Santander TechTalkconfluent
During this session we will examine how the worlds of events and microservices complement and improve each other, exploring how event-based patterns allow us to decompose monoliths in a scalable, resilient, and decoupled way.
Q&A with Confluent Experts: Navigating Networking in Confluent Cloudconfluent
This document discusses networking options and best practices for Confluent Cloud. It provides an overview of public endpoints, private link, and peering options. It then discusses best practices for private networking architectures on Azure using hub-and-spoke and private link designs. Finally, it addresses networking considerations and challenges for Kafka Connect managed connectors, as well as planned enhancements for DNS peering and outbound private link support.
The purpose of the session is to dive into Apache Kafka, data streaming, and Kafka in the cloud:
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
No matter whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment or are in a different situation where data protection and encryption of sensitive information is required, Confluent Service Mesh allows you to transparently encrypt your data without the need to make code changes to your existing applications.
Citi Tech Talk: Event Driven Kafka Microservicesconfluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka
Confluent & GSI Webinars series - Session 3confluent
An in depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre Sales, and also the more technically minded business aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
This document discusses moving to an event-driven architecture using Confluent. It begins by outlining some of the limitations of traditional messaging middleware approaches. Confluent provides benefits like stream processing, persistence, scalability and reliability while avoiding issues like lack of structure, slow consumers, and technical debt. The document then discusses how Confluent can help modernize architectures, enable new real-time use cases, and reduce costs through migration. It provides examples of how companies like Advance Auto Parts and Nord/LB have benefitted from implementing Confluent platforms.
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a Data Streaming Platform is integral to the evolution of a company’s data strategy and how Confluent is not just an integration layer but the central nervous system for an organisation
You will also learn how to:
• Build products and features faster with a complete suite of connectors and stream management tools, and connect your environments to data pipelines
• Protect your most critical data and workloads with built-in security, governance, and resilience guarantees
• Deploy Kafka at scale in minutes while reducing the associated costs and operational burden
Confluent Partner Tech Talk with Synthesisconfluent
A discussion on the arduous planning process, and deep dive into the design/architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Secure-by-Design Using Hardware and Software Protection for FDA ComplianceICS
This webinar explores the “secure-by-design” approach to medical device software development. During this important session, we will outline which security measures should be considered for compliance, identify technical solutions available on various hardware platforms, summarize hardware protection methods you should consider when building in security and review security software such as Trusted Execution Environments for secure storage of keys and data, and Intrusion Detection Protection Systems to monitor for threats.
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
React.js, a JavaScript library developed by Facebook, has gained immense popularity for building user interfaces, especially for single-page applications. Over the years, React has evolved and expanded its capabilities, becoming a preferred choice for mobile app development. This article will explore why React.js is an excellent choice for the Best Mobile App development company in Noida.
Visit Us For Information: https://www.linkedin.com/pulse/what-makes-reactjs-stand-out-mobile-app-development-rajesh-rai-pihvf/
Stork Product Overview: An AI-Powered Autonomous Delivery FleetVince Scalabrino
Imagine a world where instead of blue and brown trucks dropping parcels on our porches, a buzzing drove of drones delivered our goods. Now imagine those drones are controlled by three purpose-built AIs designed to ensure all packages are delivered as quickly and as economically as possible. That's what Stork is all about.
The Comprehensive Guide to Validating Audio-Visual Performances.pdfkalichargn70th171
Ensuring the optimal performance of your audio-visual (AV) equipment is crucial for delivering exceptional experiences. AV performance validation is a critical process that verifies the quality and functionality of your AV setup. Whether you're a content creator, a business conducting webinars, or a homeowner creating a home theater, validating your AV performance is essential.
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdfBaha Majid
IBM watsonx Code Assistant for Z, our latest Generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely from digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of the IBM watsonx Code Assistant for Z which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
Penify - Let AI do the Documentation, you write the Code.KrishnaveniMohan1
Penify automates the software documentation process for Git repositories. Every time a code modification is merged into "main", Penify uses a Large Language Model to generate documentation for the updated code. This automation covers multiple documentation layers, including InCode Documentation, API Documentation, Architectural Documentation, and PR documentation, each designed to improve different aspects of the development process. By taking over the entire documentation process, Penify tackles the common problem of documentation becoming outdated as the code evolves.
https://www.penify.dev/
Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines
Automating end-to-end (e2e) tests for Android and iOS native apps, and web apps, within Azure build and release pipelines poses several challenges. This session dives into the key challenges and the repeatable solutions implemented across multiple teams at a leading Indian telecom disruptor, renowned for its affordable 4G/5G services, digital platforms, and broadband connectivity.
Challenge #1. Ensuring Test Environment Consistency: Establishing a standardized test execution environment across hundreds of Azure DevOps agents is crucial for achieving dependable testing results. This uniformity must seamlessly span from Build pipelines to various stages of the Release pipeline.
Challenge #2. Coordinated Test Execution Across Environments: Executing distinct subsets of tests using the same automation framework across diverse environments, such as the build pipeline and specific stages of the Release Pipeline, demands flexible and cohesive approaches.
Challenge #3. Testing on Linux-based Azure DevOps Agents: Conducting tests, particularly for web and native apps, on Azure DevOps Linux agents lacking browser or device connectivity presents specific challenges in attaining thorough testing coverage.
This session delves into how these challenges were addressed through:
1. Automate the setup of essential dependencies to ensure a consistent testing environment.
2. Create standardized templates for executing API tests, API workflow tests, and end-to-end tests in the Build pipeline, streamlining the testing process.
3. Implement task groups in Release pipeline stages to facilitate the execution of tests, ensuring consistency and efficiency across deployment phases.
4. Deploy browsers within Docker containers for web application testing, enhancing portability and scalability of testing environments.
5. Leverage diverse device farms dedicated to Android, iOS, and browser testing to cover a wide range of platforms and devices.
6. Integrate AI technology, such as Applitools Visual AI and Ultrafast Grid, to automate test execution and validation, improving accuracy and efficiency.
7. Utilize AI/ML-powered central test automation reporting server through platforms like reportportal.io, providing consolidated and real-time insights into test performance and issues.
These solutions not only facilitate comprehensive testing across platforms but also promote the principles of shift-left testing, enabling early feedback, implementing quality gates, and ensuring repeatability. By adopting these techniques, teams can effectively automate and execute tests, accelerating software delivery while upholding high-quality standards across Android, iOS, and web applications.
Photoshop Tutorial for Beginners (2024 Edition)alowpalsadig
Photoshop Tutorial for Beginners (2024 Edition)
Explore the evolution of programming and software development and design in 2024. Discover emerging trends shaping the future of coding in our insightful analysis."
Here's an overview: Introduction: The Evolution of Programming and Software Development; The Rise of Artificial Intelligence and Machine Learning in Coding; Adopting Low-Code and No-Code Platforms; Quantum Computing: Entering the Software Development Mainstream; Integration of DevOps with Machine Learning: MLOps; Advancements in Cybersecurity Practices; The Growth of Edge Computing; Emerging Programming Languages and Frameworks; Software Development Ethics and AI Regulation; Sustainability in Software Engineering; The Future Workforce: Remote and Distributed Teams; Conclusion: Adapting to the Changing Software Development Landscape.
The importance of developing and designing programming in 2024
Programming design and development represents a vital step in keeping pace with technological advancements and meeting ever-changing market needs. This course is intended for anyone who wants to understand the fundamental importance of software development and design, whether you are a beginner or a professional seeking to update your knowledge.
Course objectives:
1. Learn about the basics of software development:
- Understanding software development processes and tools.
- Identify the role of programmers and designers in software projects.
2. Understanding the software design process:
- Learn about the principles of good software design.
- Discussing common design patterns such as Object-Oriented Design.
3. The importance of user experience (UX) in modern software:
- Explore how user experience can improve software acceptance and usability.
- Tools and techniques to analyze and improve user experience.
4. Increase efficiency and productivity through modern development tools:
- Access to the latest programming tools and languages used in the industry.
- Study live examples of applications
Transforming Product Development using OnePlan To Boost Efficiency and Innova...OnePlan Solutions
Ready to overcome challenges and drive innovation in your organization? Join us in our upcoming webinar where we discuss how to combat resource limitations, scope creep, and the difficulties of aligning your projects with strategic goals. Discover how OnePlan can revolutionize your product development processes, helping your team to innovate faster, manage resources more effectively, and deliver exceptional results.
Ensuring Efficiency and Speed with Practical Solutions for Clinical OperationsOnePlan Solutions
Clinical operations professionals encounter unique challenges. Balancing regulatory requirements, tight timelines, and the need for cross-functional collaboration can create significant internal pressures. Our upcoming webinar will introduce key strategies and tools to streamline and enhance clinical development processes, helping you overcome these challenges.
Software Test Automation - A Comprehensive Guide on Automated Testing.pdfkalichargn70th171
Moving to a more digitally focused era, the importance of software is rapidly increasing. Software tools are crucial for upgrading life standards, enhancing business prospects, and making a smart world. The smooth and fail-proof functioning of the software is very critical, as a large number of people are dependent on them.
Nashik's top web development company, Upturn India Technologies, crafts innovative digital solutions for your success. Partner with us and achieve your goals
Why Apache Kafka Clusters Are Like Galaxies (And Other Cosmic Kafka Quandarie...Paul Brebner
Closing talk for the Performance Engineering track at Community Over Code EU (Bratislava, Slovakia, June 5 2024) https://eu.communityovercode.org/sessions/2024/why-apache-kafka-clusters-are-like-galaxies-and-other-cosmic-kafka-quandaries-explored/ Instaclustr (now part of NetApp) manages 100s of Apache Kafka clusters of many different sizes, for a variety of use cases and customers. For the last 7 years I’ve been focused outwardly on exploring Kafka application development challenges, but recently I decided to look inward and see what I could discover about the performance, scalability and resource characteristics of the Kafka clusters themselves. Using a suite of Performance Engineering techniques, I will reveal some surprising discoveries about cosmic Kafka mysteries in our data centres, related to: cluster sizes and distribution (using Zipf’s Law), horizontal vs. vertical scalability, and predicting Kafka performance using metrics, modelling and regression techniques. These insights are relevant to Kafka developers and operators.
Operational ease MuleSoft and Salesforce Service Cloud Solution v1.0.pptx
Set Your Data In Motion - CTO Roundtable
1. Event Streaming CTO Roundtable
Real-World Use Cases for Data in Motion with Cloud-native Architectures
Kai Waehner
Field CTO
kai.waehner@confluent.io
linkedin.com/in/kaiwaehner
@KaiWaehner
confluent.io
kai-waehner.de
2. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
3. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
4. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
This is a fundamental paradigm shift...
Infrastructure as code is the future of the datacenter.
Data in motion, as continuous streams of events, is the future of data.
Both come together in cloud-native event streaming.
6. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Apache Kafka is the Platform for Data in Motion
Diagram: producers (MES, ERP, sensors, mobile) publish into Kafka's streams and storage of real-time events; connectors and stream processing apps sit on both sides; consumers include a Customer 360, a real-time alerting system, and a data warehouse. Example events: supplier, alert, forecast, inventory, customer, order.
7. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Analogy: Apache Kafka is the car engine; Confluent is the complete car and, ultimately, the self-driving car. Confluent completes Apache Kafka. Cloud-native. Everywhere.
8. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
9. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Lambda Architecture
Option 1: Unified serving layer
Diagram: a data source feeds both a real-time layer (data processing in motion, millisecond latency) and a batch layer (data processing at rest, minute-to-hour latency); both write into a unified serving layer consumed by a real-time app and a batch app.
10. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Lambda Architecture
Option 2: Separate serving layers
Diagram: the data source feeds a real-time layer (data processing in motion, millisecond latency) that maintains a speed view for real-time queries, and a batch layer (data processing at rest, minute-to-hour latency) that maintains a batch view for batch queries; mixed queries combine both views.
11. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Kappa Architecture
One pipeline for real-time and batch consumers
Diagram: the data source feeds a single real-time layer (data processing in motion); a real-time app consumes the stream within milliseconds, while a batch app reads the same stream from storage at minute-to-hour latency (a consumer-level sketch follows below).
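To illustrate the single-pipeline idea, here is a minimal sketch (broker address, topic and group names are assumptions) in which the same Kafka topic serves both a low-latency consumer and a batch-style consumer that simply replays the stream from the beginning:

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KappaConsumers {
    // Builds a consumer; 'fromBeginning' controls whether a new consumer group
    // replays history (batch-style) or only reads new events (real-time style).
    static KafkaConsumer<String, String> consumer(String groupId, boolean fromBeginning) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");   // assumed broker
        p.put("group.id", groupId);
        p.put("auto.offset.reset", fromBeginning ? "earliest" : "latest");
        p.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> c = new KafkaConsumer<>(p);
        c.subscribe(List.of("sensor-events"));           // hypothetical topic
        return c;
    }

    public static void main(String[] args) {
        // Same topic, two consumption styles: no separate batch layer needed.
        try (KafkaConsumer<String, String> realtime = consumer("realtime-app", false);
             KafkaConsumer<String, String> batch = consumer("nightly-report", true)) {
            ConsumerRecords<String, String> fresh = realtime.poll(Duration.ofMillis(500));
            ConsumerRecords<String, String> history = batch.poll(Duration.ofSeconds(5));
            System.out.printf("real-time: %d events, replay: %d events%n", fresh.count(), history.count());
        }
    }
}

The only difference between the two consumption styles is the offset they start from; there is no second codebase and no separate batch pipeline.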
13. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Kappa @ Shopify
Kappa building blocks:
• The log (Kafka): durability with topic compaction and Tiered Storage; consistency via exactly-once semantics (EOS); data integration via Kafka Connect; elasticity via dynamic Kafka clusters
• Streaming framework (Kafka Streams / Flink): reliability and scalability, fault tolerance, state management
• Sinks: update/upsert for simplified design (RDBMS, NoSQL, compacted Kafka topics); append-only (regular Kafka topics, time series)
(A configuration sketch for compaction and exactly-once writes follows below.)
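Two of these building blocks can be sketched directly with the Kafka client APIs; the topic name, sizing and transactional id below are assumptions, not values from the talk:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class KappaBuildingBlocks {
    public static void main(String[] args) throws Exception {
        Properties common = new Properties();
        common.put("bootstrap.servers", "localhost:9092");   // assumed broker

        // Compacted topic: keeps the latest value per key (upsert-style changelog).
        // Replication factor 3 assumes a three-broker cluster.
        try (AdminClient admin = AdminClient.create(common)) {
            NewTopic topic = new NewTopic("customer-profile", 6, (short) 3)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(List.of(topic)).all().get();
        }

        // Exactly-once semantics: idempotent, transactional producer.
        Properties prod = new Properties();
        prod.putAll(common);
        prod.put("enable.idempotence", "true");
        prod.put("transactional.id", "kappa-writer-1");
        prod.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prod.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prod)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("customer-profile", "customer-42", "{\"tier\":\"gold\"}"));
            producer.commitTransaction();
        }
    }
}

Compaction gives the upsert-style sink mentioned above, while the idempotent, transactional producer provides the exactly-once write path.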
14. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Kappa @ Disney
(See: "Streaming Machine Learning without a Data Lake", www.kai-waehner.de | @KaiWaehner)
15. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
16. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Mainframe Offloading
Journey from mainframe to hybrid* and cloud:
• Phase 1: Mainframe offloading
• Phase 2: Hybrid replication
• Phase 3: Mainframe replacement
* with or without the mainframe
17. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Strangler Design Pattern - A Big Bang will FAIL !!!
https://paulhammant.com/2013/07/14/legacy-application-strangulation-case-studies/
18. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Year 0: Direct Communication between Mainframe and App
Diagram: (1) the application communicates directly with Core Banking ‘1970’ on the mainframe; the account data is shown as a Date/Amount table (1/27/2017 $4.56, 1/22/2017 $32.14).
19. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Year 1: Kafka for Decoupling between Mainframe and App
Diagram: (2) Kafka now sits as a decoupling layer between Core Banking ‘1970’ (mainframe) and the application, replacing (1) the direct legacy mainframe communication. Mainframe integration options: Change Data Capture (IIDR), Kafka Connect (JMS, MQ, JDBC), REST Proxy, Kafka client, or a 3rd-party CDC tool (a connector registration sketch follows below).
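For the Kafka Connect option, the decoupling step is mostly configuration: a connector definition is posted to the Connect REST API. The sketch below uses Confluent's JDBC source connector class, but the connection URL, table, column names and endpoint are hypothetical placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterJdbcSourceConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition: stream rows from a (hypothetical) core-banking table into Kafka.
        String connector = """
            {
              "name": "core-banking-transactions",
              "config": {
                "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url": "jdbc:db2://mainframe-gateway:50000/COREBANK",
                "mode": "incrementing",
                "incrementing.column.name": "TX_ID",
                "table.whitelist": "TRANSACTIONS",
                "topic.prefix": "mainframe."
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect:8083/connectors"))   // assumed Connect REST endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connector))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}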
20. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Year 2 to 4: New Projects and Applications
Diagram: on top of (1) the direct legacy mainframe communication and (2) Kafka for decoupling between mainframe and app, (3) new projects and applications attach to the same event stream: agile, lightweight (but scalable, robust) microservices, a big data project (Elastic, Spark, AWS services, …), and an external solution. The mainframe integration options are unchanged: Change Data Capture (IIDR), Kafka Connect (JMS, MQ, JDBC), REST Proxy, Kafka client, or a 3rd-party CDC tool.
21. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Year 5: Mainframe Replacement
Diagram: after (1) direct legacy communication, (2) Kafka for decoupling, and (3) new projects and applications (microservices, big data project, external solution), step (4) replaces the mainframe: Core Banking ‘1970’ is retired in favour of Core Banking ‘2020’ on modern technology, serving the same Date/Amount account data.
22. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Integration platform for legacy and modern technologies
23. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Global Event Streaming
• Aggregate small-footprint edge deployments with replication (aggregation)
• Simplify disaster recovery operations with Multi-Region Clusters with RPO=0 and RTO=0
• Stream data globally with replication and Cluster Linking
24. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Focus on business and data products with decoupled microservices
Diagram: domain → data product → data mesh
The mesh is a logical view, not a physical one!
25. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
26. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Context-specific Customer 360
Electrical retailer (AO):
• Hyper-personalized online retail experience, turning each customer visit into a one-on-one marketing opportunity
• Correlation of historical customer data with real-time digital signals (a join sketch follows below)
• Maximize customer satisfaction and revenue growth, increase customer conversions
https://www.confluent.io/customers/ao/
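One common way to implement this correlation is a stream-table join in Kafka Streams; the sketch below uses hypothetical topic names and treats the values as plain JSON strings:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

import java.util.Properties;

public class Customer360Join {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Historical customer profile, materialised as a table (latest state per customer id).
        KTable<String, String> profiles = builder.table("customer-profile");
        // Real-time digital signals, e.g. clickstream events keyed by customer id.
        KStream<String, String> clicks = builder.stream("website-clicks");

        // Enrich each click with the customer's profile and emit a context-specific event.
        clicks.join(profiles, (click, profile) -> "{\"click\":" + click + ",\"profile\":" + profile + "}")
              .to("customer-360-events");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-360");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}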
27. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Innovative business model
• Clothing rental subscription service
• Very different from a typical e-commerce model
• Need for a real-time, event-driven architecture
Benefits of serverless Confluent Cloud
• Cut launch time from over a year to 6 months
• Stable production ops set up in 1 week vs. 6 months
• Administrative overhead reduced by 10
https://www.confluent.io/customers/nuuly/
28. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
‘My Porsche’
A digital service platform for customers, fans, and enthusiasts
https://medium.com/porschedev
29. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Omnichannel Retail
Diagram: a single customer journey over time, correlated in a Customer 360 spanning website, mobile app, on site in store, and in-car: a context-specific marketing campaign (90 and 60 days ago), the car configurator (10 and 8 days ago), a location-based customer action (right now), and a sales talk on site in the car dealership.
31. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
32. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Hybrid Streaming Data Exchange
Diagram: streams of real-time events connect a CRM with customer data, a Real-Time Location System (RTLS) for asset tracking, an Advanced Planning and Scheduling (APS) system, and a reporting API for managers; the events carry customer data, truck and train schedules, payment data, route details, and loyalty information. The deployment spans a cloud VPC and AWS Wavelength zones on the 5G networks of two carriers.
34. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
BMW Group
Mission-critical workloads at the edge and in the cloud
• Why Kafka? Decoupling. Transparency. Innovation.
• Why Confluent? Stability is key in manufacturing.
• Decoupling between logistics and production systems
• Edge platform (self-managed) + Azure cloud (fully managed) + bidirectional integration
• Use case: logistics and supply chain in global plants; right stock in place (physically and in ERP systems like SAP); just in time, just in sequence; lots of critical applications
Jay Kreps, Confluent CEO, and Felix Böhm, BMW Plant Digitalization and Cloud Transformation
Keynote at Kafka Summit Europe 2021: https://www.youtube.com/watch?v=3cG2ud7TRs4
(My notes from the BMW keynote at Kafka Summit EU 2021)
35. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Agenda
• Data in Motion with Event Streaming
• Streaming ETL Pipelines
• IT Modernisation and Hybrid Multi-Cloud
• Customer Experience and Customer 360
• IoT and Big Data Processing
• Machine Learning and Analytics
37. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Apache Kafka’s Open Ecosystem as Infrastructure for ML
Diagram: around the Kafka cluster sit Kafka Streams / ksqlDB, Kafka Connect, Confluent REST Proxy, Confluent Schema Registry, Kafka producers for Go / .NET / Python, and a ksqlDB Python client.
38. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Streaming Ingestion and Model Training with TensorFlow I/O
https://github.com/tensorflow/io
Direct streaming ingestion for model training with the TensorFlow I/O Kafka plugin: no additional data storage like S3 or HDFS is required. Diagram: a producer writes to the distributed commit log; Model A and Model B are trained from the stream, and Model X is trained at a later time.
40. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Model Deployment with Apache Kafka, ksqlDB and TensorFlow
User Defined Function (UDF), called from ksqlDB (a sketch of such a UDF follows below):
CREATE STREAM AnomalyDetection AS
  SELECT sensor_id, detectAnomaly(sensor_values)
  FROM car_engine;
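The detectAnomaly call above refers to a custom ksqlDB user-defined function. A minimal sketch of such a UDF is shown below; the threshold logic is purely illustrative, whereas the talk's demo would delegate this scoring to an embedded TensorFlow model:

import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

import java.util.List;

@UdfDescription(name = "detectAnomaly", description = "Flags anomalous sensor readings")
public class DetectAnomalyUdf {

    // In the demo scenario this is where an embedded TensorFlow model would score the values;
    // here a simple threshold stands in for the model.
    @Udf(description = "Returns true if the sensor values look anomalous")
    public boolean detectAnomaly(final List<Double> sensorValues) {
        if (sensorValues == null || sensorValues.isEmpty()) {
            return false;
        }
        double avg = sensorValues.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return avg > 100.0;   // illustrative threshold only
    }
}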
41. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Fraud Detection @ Grab
GrabDefence is a SaaS service built with Confluent Cloud, Kafka Streams and ML for stateful stream processing.
Billions of fraud and safety detections are performed daily for millions of transactions (1.6% is lost to fraud in Southeast Asia).
42. @KaiWaehner - www.kai-waehner.de – Cloud-native Event Streaming CTO Roundtable
Analogy: Apache Kafka is the car engine; Confluent is the complete car and, ultimately, the self-driving car. Confluent completes Apache Kafka. Cloud-native. Everywhere.