Apache Kafka is an open source event streaming platform. It is often used to complement or even replace existing middleware to integrate applications and build microservice architectures. Apache Kafka is already used in various projects at almost every large company today: understood, battle-tested, highly scalable, reliable, real-time.
Blockchain is a different story. The technology is frequently in the news, especially in connection with cryptocurrencies like Bitcoin. But what is its added value for software architectures? Is blockchain just hype that adds complexity? Or will everybody use it in the future, like a web browser or mobile app today? And how does it relate to an integration architecture and an event streaming platform?
This session explores use cases for blockchains and discusses alternatives such as Hyperledger, Ethereum and a Kafka-native tamper-proof blockchain implementation. Different architectures are discussed to understand when blockchain really adds value and how it can be combined with the Apache Kafka ecosystem to integrate blockchain with the rest of the enterprise architecture and build a highly scalable and reliable event streaming infrastructure.
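To make the idea of a Kafka-native tamper-proof log concrete, here is a minimal, hypothetical sketch (not the AiB implementation discussed in the talk): each record carries a SHA-256 hash of its predecessor, so any retroactive change breaks the chain and is detectable. Broker address, topic name and payloads are placeholders; a single-partition topic is assumed so the chain stays totally ordered.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Properties;

public class HashChainedProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("enable.idempotence", "true"); // no duplicates in the chain

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            String prevHash = "GENESIS";
            for (String payload : new String[]{"tx-1", "tx-2", "tx-3"}) {
                // Each record embeds the hash of its predecessor; rewriting any
                // earlier record breaks the chain and is therefore detectable.
                String hash = HexFormat.of().formatHex(
                        sha256.digest((prevHash + payload).getBytes(StandardCharsets.UTF_8)));
                producer.send(new ProducerRecord<>("ledger", hash, prevHash + "|" + payload));
                prevHash = hash;
            }
        }
    }
}
```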
Speakers:
Kai Waehner, Technology Evangelist, Confluent
Stephen Reed, CTO, Co-Founder, AiB
Stream me to the Cloud (and back) with Confluent & MongoDB | confluent
In this online talk, we’ll explore how and why companies are leveraging Confluent and MongoDB to modernize their architecture and harness the scalability of the cloud and the velocity of streaming. Based on a sample retail business scenario, we will explain how changes in an on-premises database are streamed via Confluent Cloud to MongoDB Atlas and back.
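One common way to wire up such a pipeline is Kafka Connect. The sketch below registers the official MongoDB sink connector against a Connect worker’s REST API; the worker host, connection URI, database and topic names are placeholder assumptions, not the talk’s actual setup.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterMongoSink {
    public static void main(String[] args) throws Exception {
        // Sink connector config: every record on "retail.orders" is written
        // to the orders collection in MongoDB Atlas. All values are placeholders.
        String config = """
            {
              "name": "retail-mongo-sink",
              "config": {
                "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
                "topics": "retail.orders",
                "connection.uri": "mongodb+srv://user:password@cluster0.example.mongodb.net",
                "database": "retail",
                "collection": "orders"
              }
            }""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect-worker:8083/connectors")) // Connect REST API
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();
        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```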
How Apache Kafka helps to create Data Culture – How to Cross the Kafka Chasm | confluent
In this webinar we share our experience of how Swiss Mobiliar, the biggest Swiss household insurance enterprise, introduced Kafka and led it to enterprise-wide adoption with the help of AGOORA.com.
Bridge Your Kafka Streams to Azure Webinar | confluent
With fully managed Apache Kafka® as a service on Microsoft Azure, businesses can focus on building applications, not managing clusters. Build a persistent bridge from on-premises data systems to the cloud with a hybrid Kafka service, or stream across public clouds for multi-cloud data pipelines.
In this session for business and technical data leaders, you can learn about powering business applications with the managed Kafka service that streams data into Azure SQL Data Warehouse, Cosmos DB, Azure Data Lake Storage and Azure Blob Storage.
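As a rough illustration of the “bridge” idea, a minimal Java producer pointed at a managed cloud cluster might be configured like this; the bootstrap endpoint, API key/secret and topic name are placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CloudBridgeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholders: use the bootstrap endpoint and API key of your own cluster.
        props.put("bootstrap.servers", "pkc-xxxxx.westeurope.azure.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // An on-premises application publishing into the cloud bridge topic.
            producer.send(new ProducerRecord<>("onprem.events", "order-42", "{\"status\":\"shipped\"}"));
        }
    }
}
```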
Confluent Cloud for Apache Kafka® | Google Cloud Next ’19 | confluent
Google Cloud Next ’19
Speakers:
Gaetan Castelein, Confluent Product Marketing
Kir Titievsky, Google Product Management
Confluent Cloud for Apache Kafka® was a session conducted at Google Cloud Next ’19 on the topic of how Confluent and Google are partnering to give you a complete event-streaming platform that extends Kafka with essential capabilities for developers and enterprises. Confluent is available as a fully managed, first class service on GCP, or can be deployed on-premises on Google Cloud Services Platform. Developers can deploy Confluent Cloud™ in minutes right from the Google Cloud Console to start building event-driven applications. Enterprises can build hybrid cloud streaming solutions with a common platform that spans from on-premises to GCP, streaming data to GCP to leverage best-of-breed services such as BigQuery and TensorFlow. Review this presentation to learn about Confluent and GCP services, and see how you can get started in just minutes with no upfront commitment.
Apache Kafka® is the technology behind event streaming which is fast becoming the central nervous system of flexible, scalable, modern data architectures. Customers want to connect their databases, data warehouses, applications, microservices and more, to power the event streaming platform. To connect to Apache Kafka, you need a connector!
This online talk focuses on the key business drivers behind connecting to Kafka and introduces the new Confluent Verified Integrations Program. We’ll discuss what it takes to participate, the process and benefits of the program.
Redis and Kafka - Advanced Microservices Design Patterns Simplified | Allen Terleto
The adoption and popularity of the microservices architecture continues to grow across a spectrum of enterprises in every industry. Although a consensus on an implementation standard has yet to be reached, thought leaders and the development community have established advanced design patterns and lessons learned about the complexities and pitfalls of deploying microservices at scale. With Redis and Kafka becoming de facto standards across most microservices architectures, we will discuss how their combination can simplify the implementation of event-driven design patterns that provide real-time performance, scalability, resiliency, traceability for compliance, and observability, while reducing technology sprawl and scaling to thousands of services. In this discussion, we will decompose a real-time event-driven payment-processing microservices workflow to explore capturing telemetry data, event sourcing, CQRS, orchestrated SAGA workflows, inter-service communication, state machines, and more.
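As a small illustration of one of these patterns (not the speaker’s reference implementation), the sketch below projects a Kafka payment-event stream into a Redis hash, i.e. a CQRS read model; broker address, topic and key layout are assumptions:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import redis.clients.jedis.Jedis;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class PaymentReadModel {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "payment-read-model");      // the CQRS query side
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis redis = new Jedis("localhost", 6379)) {
            consumer.subscribe(List.of("payment-events")); // event-sourced log
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Project the immutable event log into a low-latency Redis view
                    // that dashboards and APIs can query without touching Kafka.
                    redis.hset("payment:" + record.key(), "lastEvent", record.value());
                }
            }
        }
    }
}
```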
Apache Kafka® and Analytics in a Connected IoT World | confluent
Apache Kafka® and Analytics in a Connected IoT World, Kai Waehner, Sr. Solutions Engineer Advanced Technology Group, Confluent
https://www.meetup.com/Berlin-Apache-Kafka-Meetup-by-Confluent/events/273166575/
Concepts and Patterns for Streaming Services with Kafka | QAware GmbH
Cloud Native Night March 2020, Mainz: Talk by Perry Krol (@perkrol, Confluent)
Abstract: Proven approaches such as service-oriented and event-driven architectures are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but they provide a more holistic and compelling approach when applied together. In this session, Confluent provides insights into how service-based architectures and stream processing tools such as Apache Kafka® can help you build business-critical systems. You will learn why streaming beats request-response based architectures in complex, contemporary use cases, and why replayable logs such as Kafka provide a backbone for both service communication and shared datasets.
Based on these principles, we will explore how event collaboration and event sourcing patterns increase safety and recoverability with functional, event-driven approaches, apply patterns including Event Sourcing and CQRS, and build multi-team systems with microservices and SOA using patterns such as “inside-out databases” and “event streams as a source of truth”.
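As a minimal illustration of “event streams as a source of truth”, the Kafka Streams sketch below derives a queryable, replayable view from an event topic; topic names and serdes are assumptions, not the session’s demo code:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

import java.util.Properties;

public class OrderCountView {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-count-view");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // "orders" is the replayable source of truth; the KTable below is a
        // derived view that can be rebuilt at any time from the log.
        KTable<String, Long> ordersPerCustomer = builder
                .stream("orders", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("orders-per-customer"));
        ordersPerCustomer.toStream()
                .to("orders-per-customer-changelog", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```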
Apache Kafka vs. Integration Middleware (MQ, ETL, ESB) - Friends, Enemies or ... | confluent
MQ, ETL and ESB middleware are often used as an integration backbone between legacy applications, modern microservices and cloud services. This introduces several challenges and complexities, such as point-to-point integration and non-scalable architectures. This session discusses how to build a completely event-driven streaming platform with Apache Kafka’s open source messaging, integration and streaming components to leverage distributed processing, fault tolerance, rolling upgrades and the ability to reprocess events. Learn the differences between an event-driven streaming platform built on Apache Kafka and middleware like MQ, ETL and ESBs – including best practices and anti-patterns, but also how these concepts and tools complement each other in an enterprise architecture.
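The “ability to reprocess events” is one concrete differentiator from classic MQ: because Kafka retains the log, a consumer can simply rewind. A minimal sketch, assuming an “orders” topic and a fresh consumer group:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ReprocessEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "reprocessing-v2");         // fresh group, fresh offsets
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (consumer.assignment().isEmpty()) {
                consumer.poll(Duration.ofMillis(100)); // wait for partition assignment
            }
            consumer.seekToBeginning(consumer.assignment()); // rewind: replay the full log
            while (true) {
                consumer.poll(Duration.ofMillis(500))
                        .forEach(r -> System.out.printf("replayed %s=%s%n", r.key(), r.value()));
            }
        }
    }
}
```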
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Fi... | confluent
Watch this talk here: https://www.confluent.io/online-talks/using-apache-kafka-to-optimize-real-time-analytics-financial-services-iot-applications
When it comes to the fast-paced nature of capital markets and IoT, the ability to analyze data in real time is critical to gaining an edge. It’s not just about the quantity of data you can analyze at once, it’s about the speed, scale, and quality of the data you have at your fingertips.
Modern streaming data technologies like Apache Kafka and the broader Confluent platform can help detect opportunities and threats in real time. They can improve profitability, yield, and performance. Combining Kafka with Panopticon visual analytics provides a powerful foundation for optimizing your operations.
Use cases in capital markets include transaction cost analysis (TCA), risk monitoring, surveillance of trading and trader activity, compliance, and optimizing profitability of electronic trading operations. Use cases in IoT include monitoring manufacturing processes, logistics, and connected vehicle telemetry and geospatial data.
This online talk includes in-depth practical demonstrations of how Confluent and Panopticon together support several key applications. You will learn:
-Why Apache Kafka is widely used to improve performance of complex operational systems
-How Confluent and Panopticon open new opportunities to analyze operational data in real time
-How to quickly identify and react immediately to fast-emerging trends, clusters, and anomalies
-How to scale data ingestion and data processing
-How to build new analytics dashboards in minutes
Streamsheets and Apache Kafka – Interactively build real-time Dashboards and ... | confluent
A powerful stream processing platform combined with an end-user-friendly spreadsheet interface: if this combination rings a bell, you should definitely attend our “Streamsheets and Apache Kafka” webinar. While development is interactive via a web user interface, Streamsheets applications can run as mission-critical applications. They directly consume and produce event streams in Apache Kafka. One popular option is to run everything in the cloud, leveraging the fully managed Confluent Cloud service on AWS, GCP or Azure. Without any coding or scripting, end users leverage their existing spreadsheet skills to build customized streaming apps for analysis, dashboarding, condition monitoring or any kind of real-time pre- and post-processing of Kafka or ksqlDB streams and tables.
Hear Kai Waehner of Confluent and Kristian Raue of Cedalo on these topics:
• Where Apache Kafka and Streamsheets fit in the data ecosystem (Industrial IoT, Smart Energy, Clinical Applications, Finance Applications)
• Customer Story: How the Freiburg University Hospital uses Kafka and Streamsheets for dashboarding the utilization of clinical assets
• 15-Minute Live Demonstration: Building a financial fraud detection dashboard based on Confluent Cloud, ksqlDB and Cedalo Cloud Streamsheets, using just spreadsheet formulas (the flavor of ksqlDB statement behind such a dashboard is sketched below)
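A hypothetical example of such a ksqlDB statement, submitted here via the ksqlDB Java client; the ‘payments’ stream, its field names and the threshold are assumptions, not the webinar’s actual demo:

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;

public class FraudDetectionQuery {
    public static void main(String[] args) throws Exception {
        // Assumes a reachable ksqlDB server and an existing 'payments' stream.
        Client client = Client.create(ClientOptions.create().setHost("localhost").setPort(8088));
        // Flag cards with more than three payment attempts in a 30-second window.
        client.executeStatement("""
            CREATE TABLE possible_fraud AS
              SELECT card_number, COUNT(*) AS attempts
              FROM payments
              WINDOW TUMBLING (SIZE 30 SECONDS)
              GROUP BY card_number
              HAVING COUNT(*) > 3
              EMIT CHANGES;
            """).get();
        client.close();
    }
}
```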
Speaker:
Kai Waehner, Technology Evangelist, Confluent
Kristian Raue, Founder & Chief Technologist, cedalo
Achieve Sub-Second Analytics on Apache Kafka with Confluent and Imply | confluent
Presenters: Rachel Pedreschi, Senior Director, Solutions Engineering, Imply.io + Josh Treichel, Partner Solutions Architect, Confluent
Analytic pipelines running purely on batch processing systems can suffer from hours of data lag, resulting in accuracy issues with analysis and overall decision-making. Join us for a demo to learn how easy it is to integrate your Apache Kafka® streams into Apache Druid (incubating) to provide real-time insights into the data.
In this online talk, you’ll hear about ingesting your Kafka streams into Imply’s scalable analytic engine and gaining real-time insights via a modern user interface.
Register now to learn about:
-The benefits of combining a real-time streaming platform with a comprehensive analytics stack
-Building an analytics pipeline by integrating Confluent Platform and Imply
-How KSQL, streaming SQL for Kafka, can easily transform and filter streams of data in real time
-Querying and visualizing streaming data in Imply
-Practical ways to implement Confluent Platform and Imply to address common use cases such as analyzing network flows, collecting and monitoring IoT data and visualizing clickstream data
Confluent Platform, developed by the creators of Kafka, enables the ingestion and processing of massive amounts of real-time event data. Imply, the complete analytics stack built on Druid, can ingest, store, query and visualize streaming data from Confluent Platform, enabling end-to-end real-time analytics. Together, Confluent and Imply provide low-latency data delivery, data transformation, and data querying capabilities to power a range of use cases.
Should we manage events like APIs? | Kim Clark, IBM | HostedbyConfluent
APIs have become ubiquitous as a way of exposing the capabilities of the enterprise both internally and externally. However, are APIs alone enough? There is a strong resurgence in interest in asynchronous communication and event-driven architecture. Applications want to receive events immediately so they can respond in real time, and they also want the benefit of being decoupled from the availability and performance characteristics of the systems providing that data. However, whilst the way that APIs are socialized, exposed and versioned is well matured in the form of API management technology, events have had no equivalent so far. We are now on the cusp of seeing first-class support for event endpoint management to provide the same sophistication for discovering, exposing and consuming events.
Kai Waehner [Confluent] | Real-Time Streaming Analytics with 100,000 Cars Usi... | InfluxData
Kai Waehner [Confluent] | Real-Time Streaming Analytics with 100,000 Cars Using MQTT, Kafka and InfluxDB 2.0 on Kubernetes | InfluxDays Virtual Experience London 2020
The Rise Of Event Streaming – Why Apache Kafka Changes Everything | Kai Wähner
Business digitalization trends like microservices, the Internet of Things and Machine Learning are driving the need to process events at a whole new scale, speed and efficiency. Traditional solutions like ETL/data integration or messaging are not built to serve these needs.
Today, the open source project Apache Kafka® is used by thousands of companies, including over 60% of the Fortune 100, to power and innovate their businesses by focusing their data strategies around event-driven architectures leveraging event streaming. We will discuss the market and technology changes that have given rise to Kafka and to event streaming, and we will introduce the audience to the key aspects of building an event streaming platform with Kafka. Examples of production use cases from the automotive, manufacturing and transportation sectors will showcase the power of event streaming.
Event streaming: A paradigm shift in enterprise software architecture | Sina Sojoodi
This talk helps developers and architects understand the benefits, opportunities and challenges in moving from traditional point-to-point integration in application architecture to one with event streaming. Apache Kafka and Spring provide a solid foundation for enterprises and large organizations to implement event streaming solutions. Examples and common patterns are covered towards the end.
Many thanks to James Watters and all the original content authors, editors and aggregators referenced in the slides.
Architecture patterns for distributed, hybrid, edge and global Apache Kafka d... | Kai Wähner
Architecture patterns for distributed, hybrid, edge and global Apache Kafka deployments
Multi-cluster and cross-data center deployments of Apache Kafka have become the norm rather than an exception. This session gives an overview of several scenarios that may require multi-cluster solutions and discusses real-world examples with their specific requirements and trade-offs, including disaster recovery, aggregation for analytics, cloud migration, mission-critical stretched deployments and global Kafka.
Key takeaways:
In many scenarios, one Kafka cluster is not enough. Understand different architectures and alternatives for multi-cluster deployments.
Zero data loss and high availability are two key requirements. Understand how to realize this, including trade-offs.
Learn about features and limitations of Kafka for multi-cluster deployments.
Global Kafka and mission-critical multi-cluster deployments with zero data loss and high availability have become the norm, not the exception (a consumer-side sketch of the aggregation scenario follows below).
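For the aggregation-for-analytics scenario, MirrorMaker 2 (part of Apache Kafka) replicates topics between clusters and by default prefixes mirrored topics with the source cluster alias. A consumer on the analytics cluster can then read local and mirrored data together; the cluster aliases and topic names below are assumptions:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class AggregateClustersConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "analytics-cluster:9092"); // placeholder
        props.put("group.id", "global-orders-analytics");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "orders" is produced locally; "dc1.orders" and "dc2.orders" are
            // MirrorMaker 2 mirrors from source clusters aliased dc1 and dc2.
            consumer.subscribe(List.of("orders", "dc1.orders", "dc2.orders"));
            while (true) {
                consumer.poll(Duration.ofSeconds(1)).forEach(r ->
                        System.out.printf("[%s] %s=%s%n", r.topic(), r.key(), r.value()));
            }
        }
    }
}
```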
Top 5 Event Streaming Use Cases for 2021 with Apache Kafka | Kai Wähner
Apache Kafka and Event Streaming are two of the most relevant buzzwords in tech these days. Ever wonder what the predicted TOP 5 Event Streaming Architectures and Use Cases for 2021 are? Check out the following presentation. Learn about edge deployments, hybrid and multi-cloud architectures, service mesh-based microservices, streaming machine learning, and cybersecurity.
On-demand video recording: https://videos.confluent.io/watch/XAjxV3j8hzwCcEKoZVErUJ
Build a Bridge to Cloud with Apache Kafka® for Data Analytics Cloud Services | confluent
Build a Bridge to Cloud with Apache Kafka® for Data Analytics Cloud Services, Perry Krol, Head of Systems Engineering, CEMEA, Confluent
https://www.meetup.com/Frankfurt-Apache-Kafka-Meetup-by-Confluent/events/269751169/
Apache Kafka as Event Streaming Platform for Microservice Architectures | Kai Wähner
This session introduces Apache Kafka, an event-driven open source streaming platform. Apache Kafka goes far beyond scalable, high volume messaging. In addition, you can leverage Kafka Connect for integration and the Kafka Streams API for building lightweight stream processing microservices in autonomous teams. The Confluent Platform adds further components such as a Schema Registry, REST Proxy, KSQL, Clients for different programming languages and Connectors for different technologies.
The session discusses how tech giants like LinkedIn, eBay or Airbnb leverage Apache Kafka as an event streaming platform to solve various business problems, and how to create a scalable, flexible microservice architecture. A live demo shows how easily you can process and analyze streams of events using Apache Kafka and KSQL.
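To illustrate the Schema Registry component mentioned above, here is a minimal producer using Confluent’s Avro serializer; broker and registry addresses, the “orders” topic and the Order schema are placeholder assumptions:

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class AvroOrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker
        props.put("schema.registry.url", "http://localhost:8081"); // placeholder registry
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());

        Schema schema = new Schema.Parser().parse("""
            {"type":"record","name":"Order","fields":[
              {"name":"id","type":"string"},
              {"name":"amount","type":"double"}]}""");
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "order-42");
        order.put("amount", 99.95);

        // The serializer registers the schema on first use and the registry
        // enforces compatibility on later changes: a contract between teams.
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", order.get("id").toString(), order));
        }
    }
}
```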
A guide through the Azure Messaging services - Update Conference | Eldert Grootenboer
https://www.updateconference.net/en/2019/session/a-guide-through-the-azure-messaging-services
Work is a Stream of Applications (Audun Strand, NAV) Kafka Summit London 2019 | confluent
NAV handles a lot of applications for our benefits (sickness, unemployment etc.). We are rewriting the systems handling these applications to a stream-based approach, placing Kafka Streams at the centre of our architecture. This talk will go through our approach to building these systems. We are mostly using Kotlin as our programming language, and are running our applications on Kubernetes. We use change data capture to extract data from our legacy systems, to ease the pain of migration. By treating our data and our applications as streams we get a lot of benefits. We can rerun our applications with different rules, to simulate the effect of policy changes. We can do real-time evaluation of applications, to improve the user experience. We also get better robustness, as we remove the runtime dependency on our core systems, and can receive applications even when other parts of the systems are down. The talk will also present the architectural rules we are implementing for NAV, using streams as the main communication pattern between organization units, and some learnings from our current work on a data platform to support “Life is a stream of events”.
Introducing new features in Confluent Platform 5.4 and Apache Kafka 2.4...
CP 5.4 (based on AK 2.4)
Security:
Role-Based Access Control (RBAC)
Structured Audit Logs
Resilience:
Multi-Region Clusters (MRC)
Data Compatibility:
Server-side Schema Validation
Management & Monitoring:
Control Center enhancements
RBAC management
Replicator monitoring
Performance & Elasticity:
Tiered Storage (preview)
Stream Processing:
New ksqlDB features like Pull Queries (sketched below) and Kafka Connect Integration (preview)
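To illustrate the pull-query feature from the list above: unlike a classic push query, a pull query fetches the current state for a key and completes, request/response style. A hypothetical sketch using the ksqlDB Java client, assuming a server at localhost and a materialized table ‘orders_per_customer’:

```java
import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import io.confluent.ksql.api.client.Row;

import java.util.List;

public class PullQueryExample {
    public static void main(String[] args) throws Exception {
        // Assumes a ksqlDB server and a materialized table 'orders_per_customer'.
        Client client = Client.create(ClientOptions.create().setHost("localhost").setPort(8088));
        // A pull query returns the current value for a key and then completes,
        // request/response style, instead of emitting a continuous changelog.
        List<Row> rows = client
                .executeQuery("SELECT * FROM orders_per_customer WHERE customer_id = 'c-42';")
                .get();
        rows.forEach(row -> System.out.println(row.values()));
        client.close();
    }
}
```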
GCP for Apache Kafka® Users: Stream Ingestion and Processing | confluent
Watch this talk here: https://www.confluent.io/online-talks/gcp-for-apache-kafka-users-stream-ingestion-processing
In private and public clouds, stream analytics commonly means stateless processing systems organized around Apache Kafka® or a similar distributed log service. GCP took a somewhat different tack, with Cloud Pub/Sub, Dataflow, and BigQuery, distributing the responsibility for processing among ingestion, processing and database technologies.
We compare the two approaches to data integration and show how Dataflow allows you to join, transform, and deliver data streams among on-prem and cloud Apache Kafka clusters, Cloud Pub/Sub topics and a variety of databases. The session mixes architectural discussions with practical code reviews of Dataflow-based pipelines.
Apache Kafka and Blockchain - Comparison and a Kafka-native Implementation | Kai Wähner
Apache Kafka is an open-source event streaming platform used to complement or replace existing middleware, integrate applications, and build microservice architectures. Used at almost every large company today, it's understood, battle-tested, highly scalable, and reliable.
Blockchain is a different story. Associated with cryptocurrencies like Bitcoin, it's often in the news. But what is its added value for software architectures? And how does it relate to an integration architecture and event streaming platform?
This session explores blockchain use cases and different alternatives such as Hyperledger, Ethereum, and a Kafka-native blockchain implementation. We discuss the value blockchain brings to different architectures, and how it can be integrated with the Kafka ecosystem to build a highly scalable and reliable event streaming infrastructure.
This talk discusses the concepts, use cases, and architectures behind Event Streaming, Apache Kafka, Distributed Ledger (DLT), and Blockchain. A comparison of different technologies such as Confluent, AIBlockchain, Hyperledger, Ethereum, Ripple, IOTA, and Libra explores when to use Kafka, a Kafka-native blockchain, a dedicated blockchain, or Kafka in conjunction with another blockchain.
Resilient Real-time Data Streaming across the Edge and Hybrid Cloud with Apac... | Kai Wähner
Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is evident for many new enterprise architectures, but some use cases require resiliency across edge sites and multiple cloud regions. Data streaming with the Apache Kafka ecosystem is a perfect technology for building resilient and hybrid real-time applications at any scale. This talk explores different architectures and their trade-offs for transactional and analytical workloads. Real-world examples include financial services, retail, and the automotive industry.
Video recording:
https://qconlondon.com/london2022/presentation/resilient-real-time-data-streaming-across-the-edge-and-hybrid-cloud
Supply Chain Optimization with Apache Kafka | Kai Wähner
Supply chain optimization leveraging event streaming with Apache Kafka. See real-world use cases and architectures from Walmart, BMW, Porsche, and other enterprises that improve Supply Chain Management (SCM) processes: automation, robustness, flexibility, real time, decoupling, data integration, and hybrid deployments...
Video recording: https://youtu.be/dUkgungBmPs
Blog post: https://www.kai-waehner.de/apache-kafka-supply-chain-management-scm-optimization-scor-six-sigma-real-time
IoT Architectures for Apache Kafka and Event Streaming - Industry 4.0, Digita... | Kai Wähner
The Internet of Things (IoT) is getting more and more traction as valuable use cases come to light. Whether you are in healthcare, telecommunications, manufacturing, banking or retail, to name a few industries, there is one key challenge: the integration of backend IoT data logs, applications, business services and cloud services to process the data in real time and at scale.
In this talk, we share how Kafka has become the leading technology used throughout the business to provide real-time event streaming. Explore real-life use cases of Kafka Connect, Kafka Streams and KSQL, independent of the deployment model, be it private or public cloud, on-premises or at the edge. A minimal MQTT-to-Kafka sketch follows the examples below.
Audi - Connected car infrastructure
Robert Bosch Power Tools - Track and Trace of devices and people at construction areas
Deutsche Bahn - Customer 360 for train timetable updates
E.ON - IoT Streaming Platform to integrate and build smart home, smart building and smart grid infrastructures
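As a minimal sketch of the connected-car pattern above (not any of the listed customers’ actual architectures), a bridge can subscribe to device telemetry over MQTT and forward it into Kafka; broker addresses and topic names are assumptions. In production, a Kafka Connect MQTT connector or a dedicated MQTT proxy would typically replace this hand-rolled bridge.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.eclipse.paho.client.mqttv3.MqttClient;

import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Cars publish telemetry over MQTT; the bridge forwards it into Kafka,
        // where it can be processed, stored and joined at scale.
        MqttClient mqtt = new MqttClient("tcp://mqtt-broker:1883", "kafka-bridge");
        mqtt.connect();
        mqtt.subscribe("car/+/telemetry", (topic, message) ->
                producer.send(new ProducerRecord<>("car-telemetry", topic,
                        new String(message.getPayload(), StandardCharsets.UTF_8))));
    }
}
```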
Apache Kafka and API Management / API Gateway – Friends, Enemies or Frenemies? | Kai Wähner
Microservices became the new black in enterprise architectures. APIs provide functions to other applications or end users. Even if your architecture uses a pattern other than microservices, like SOA (Service-Oriented Architecture) or client-server communication, APIs are used between the different applications and end users.
Apache Kafka plays a key role in modern microservice architectures to build open, scalable, flexible and decoupled real-time applications. API Management complements Kafka by providing a way to implement and govern the full life cycle of the APIs.
This session explores how event streaming with Apache Kafka and API Management (including API Gateway and Service Mesh technologies) complement and compete with each other depending on the use case and point of view of the project team. The session concludes exploring the vision of event streaming APIs instead of RPC calls.
Understand how event streaming with Kafka and Confluent complements tools and frameworks such as Kong, Mulesoft, Apigee, Envoy, Istio, Linkerd, Software AG, TIBCO Mashery, IBM, Axway, etc.
A Streaming API Data Exchange provides streaming replication between business units and companies. API Management with REST/HTTP is not appropriate for streaming data.
Fast Data – Fast Cars: How Apache Kafka Revolutionizes the Data World | confluent
For the automotive industry, as for every other sector, digital transformation is also a digital revolution: new market players, new technologies and ever-growing volumes of data create new opportunities but also new challenges – and demand not only new IT architectures but entirely new ways of thinking.
60% of Fortune 500 companies rely on the comprehensive distributed streaming platform Apache Kafka® for their data streaming projects, among them AUDI AG.
In this webinar you will learn:
How Kafka serves as the foundation both for data pipelines and for applications that consume and process real-time data streams
How Kafka Connect and Kafka Streams support business-critical applications
How Audi used Kafka and Confluent to build a Fast Data IoT platform that is revolutionizing the connected car space
Speakers:
David Schmitz, Principal Architect, Audi Electronics Venture GmbH
Kai Waehner, Technology Evangelist, Confluent
Data Streaming with Apache Kafka & MongoDB - EMEA | Andrew Morgan
A new generation of technologies is needed to consume and exploit today's real-time, fast-moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.
Webinar: Data Streaming with Apache Kafka & MongoDB | MongoDB
A new generation of technologies is needed to consume and exploit today's real-time, fast-moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.
OSACon 2023: Unlocking Financial Data with Real-Time Pipelines | Timothy Spann
Financial institutions thrive on accurate and timely data to drive critical decision-making processes, risk assessments, and regulatory compliance. However, managing and processing vast amounts of financial data in real-time can be a daunting task. To overcome this challenge, modern data engineering solutions have emerged, combining powerful technologies like Apache Flink, Apache NiFi, Apache Kafka, and Iceberg to create efficient and reliable real-time data pipelines. In this talk, we will explore how this technology stack can unlock the full potential of financial data, enabling organizations to make data-driven decisions swiftly and with confidence.
Introduction: Financial institutions operate in a fast-paced environment where real-time access to accurate and reliable data is crucial. Traditional batch processing falls short when it comes to handling rapidly changing financial markets and responding to customer demands promptly. In this talk, we will delve into the power of real-time data pipelines, utilizing the strengths of Apache Flink, Apache NiFi, Apache Kafka, and Iceberg, to unlock the potential of financial data.
Key Points to be Covered:
Introduction to Real-Time Data Pipelines: a. The limitations of traditional batch processing in the financial domain. b. Understanding the need for real-time data processing.
Apache Flink: Powering Real-Time Stream Processing: a. Overview of Apache Flink and its role in real-time stream processing. b. Use cases for Apache Flink in the financial industry. c. How Flink enables fast, scalable, and fault-tolerant processing of streaming financial data.
Apache Kafka: Building Resilient Event Streaming Platforms: a. Introduction to Apache Kafka and its role as a distributed streaming platform. b. Kafka's capabilities in handling high-throughput, fault-tolerant, and real-time data streaming. c. Integration of Kafka with financial data sources and consumers.
Apache NiFi: Data Ingestion and Flow Management: a. Overview of Apache NiFi and its role in data ingestion and flow management. b. Data integration and transformation capabilities of NiFi for financial data. c. Utilizing NiFi to collect and process financial data from diverse sources.
Iceberg: Efficient Data Lake Management: a. Understanding Iceberg and its role in managing large-scale data lakes. b. Iceberg's schema evolution and table-level metadata capabilities. c. How Iceberg simplifies data lake management in financial institutions.
Real-World Use Cases: a. Real-time fraud detection using Flink, Kafka, and NiFi. b. Portfolio risk analysis with Iceberg and Flink. c. Streamlined regulatory reporting leveraging all four technologies.
Best Practices and Considerations: a. Architectural considerations when building real-time financial data pipelines. b. Ensuring data integrity, security, and compliance in real-time pipelines. c. Scalability an
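As a rough sketch of the Flink-on-Kafka part of such a pipeline (topic name, fields and window size are illustrative assumptions, not the talk's actual demo):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class PaymentsPipeline {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Declare a Kafka-backed table (topic and fields are illustrative).
        tEnv.executeSql("""
            CREATE TABLE payments (
              account_id STRING,
              amount     DOUBLE,
              ts         TIMESTAMP(3),
              WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
            ) WITH (
              'connector' = 'kafka',
              'topic' = 'payments',
              'properties.bootstrap.servers' = 'localhost:9092',
              'scan.startup.mode' = 'earliest-offset',
              'format' = 'json'
            )""");

        // Continuous per-account aggregation over one-minute windows, the kind
        // of result that could feed a risk or fraud dashboard.
        tEnv.executeSql("""
            SELECT account_id,
                   TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
                   SUM(amount) AS total
            FROM payments
            GROUP BY account_id, TUMBLE(ts, INTERVAL '1' MINUTE)""").print();
    }
}
```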
Apache Kafka as Data Hub for Crypto, NFT, Metaverse (Beyond the Buzz!) | Kai Wähner
Decentralized finance with crypto and NFTs is a huge topic these days. It becomes a powerful combination with the coming metaverse platforms across industries. This session explores the relationship between crypto technologies and modern enterprise architecture.
I discuss how data streaming and Apache Kafka help build innovative and scalable real-time applications for a future metaverse. Let's skip the buzz (and the NFT bubble) and instead review existing real-world deployments in the crypto and blockchain world powered by Kafka and its ecosystem.
Budapest Data/ML - Building Modern Data Streaming Apps with NiFi, Flink and K... | Timothy Spann
Budapest Data/ML - Building Modern Data Streaming Apps with NiFi, Flink and Kafka
Apache NiFi, Apache Flink, Apache Kafka
Timothy Spann
Principal Developer Advocate
Cloudera
Data in Motion
https://budapestdata.hu/2023/en/speakers/timothy-spann/
June 8 · Online · English talk
Building Modern Data Streaming Apps with NiFi, Flink and Kafka
In my session, I will show you some best practices I have discovered over the last 7 years in building data streaming applications including IoT, CDC, Logs, and more.
In my modern approach, we utilize several open-source frameworks to maximize the best features of all. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Kafka. From there we build streaming ETL with Apache Flink SQL. We will stream data into Apache Iceberg.
We use the best streaming tools for the current applications with FLaNK (flankstack.dev).
BIO
Tim Spann is a Principal Developer Advocate in Data In Motion for Cloudera. He works with Apache NiFi, Apache Pulsar, Apache Kafka, Apache Flink, Flink SQL, Apache Pinot, Trino, Apache Iceberg, DeltaLake, Apache Spark, Big Data, IoT, Cloud, AI/DL, machine learning, and deep learning. Tim has over ten years of experience with the IoT, big data, distributed computing, messaging, streaming technologies, and Java programming.
Previously, he was a Developer Advocate at StreamNative, Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton & NYC on Big Data, Cloud, IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, Pulsar Summit and many more. He holds a BS and MS in computer science.
Apache Kafka in the Airline, Aviation and Travel Industry | Kai Wähner
Aviation and travel are notoriously vulnerable to social, economic, and political events, as well as the ever-changing expectations of consumers. Coronavirus is just a piece of the challenge.
This presentation explores use cases, architectures, and references for Apache Kafka as event streaming technology in the aviation industry, including airlines, airports, global distribution systems (GDS), aircraft manufacturers, and more.
Examples include Lufthansa, Singapore Airlines, Air France Hop, Amadeus, and more. Technologies include Kafka, Kafka Connect, Kafka Streams, ksqlDB, Machine Learning, Cloud, and more.
Mainframe Integration, Offloading and Replacement with Apache Kafka – Kai Wähner
Video recording of this presentation:
https://youtu.be/upWzamacOVQ
Blog post with more details:
https://www.kai-waehner.de/blog/2020/04/24/mainframe-offloading-replacement-apache-kafka-connect-ibm-db2-mq-cdc-cobol/
Mainframes are still hard at work, processing over 70 percent of the world’s most essential computing transactions every day. Very high costs, monolithic architectures, and a shrinking pool of experts are the key challenges for mainframe applications. Time to get more innovative, even with the mainframe!
Mainframe offloading with Apache Kafka and its ecosystem can be used to keep a more modern data store in real-time sync with the mainframe. At the same time, it persists the event data on the bus to enable microservices and delivers the data to other systems such as data warehouses and search indexes.
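A minimal sketch of the consuming side of such an offloading pipeline, assuming a hypothetical CDC topic mainframe.db2.accounts fed by a Kafka Connect source connector (the data warehouse and search index would simply read the same topic with their own group.id):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CdcOffloadSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Each downstream system uses its own consumer group, so all of them read
        // the same CDC stream independently, with no extra load on the mainframe.
        props.put("group.id", "modern-datastore-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("mainframe.db2.accounts")); // hypothetical CDC topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    upsertIntoModernStore(record.key(), record.value());
                }
            }
        }
    }

    // Placeholder for writing the change event into the modern data store of your choice
    private static void upsertIntoModernStore(String key, String changeEvent) {
        System.out.printf("upsert %s -> %s%n", key, changeEvent);
    }
}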
But the final goal and ultimate vision is to replace the mainframe with new applications using modern, less costly technologies. Stand up to the dinosaur, but keep in mind that legacy migration is a journey! Kai will guide you to the next step of your company’s evolution!
You will learn:
- how to not only reduce operational expenses but provide a path for architecture modernization, agility and eventually mainframe replacement
- what steps some of Confluent’s customers already took, leveraging technologies like Change Data Capture (CDC) or MQ for mainframe offloading
- how an event streaming platform enables cost reduction, architecture modernization, and a combination of a mainframe with new technologies
Apache Kafka vs. Integration Middleware (MQ, ETL, ESB) – Kai Wähner
Learn the differences between an event-driven streaming platform and middleware like MQ, ETL and ESBs – including best practices and anti-patterns, but also how these concepts and tools complement each other in an enterprise architecture.
Extract-Transform-Load (ETL) is still a widely used pattern to move data between different systems via batch processing. Because of its challenges in today’s world, where real time is the new standard, an Enterprise Service Bus (ESB) is used in many enterprises as the integration backbone between any kind of microservice, legacy application or cloud service, moving data via SOAP / REST web services or other technologies. Stream processing is often added as its own component in the enterprise architecture to correlate different events, implement contextual rules and run stateful analytics. Using all these components introduces challenges and complexities in development and operations.
This session discusses how teams in different industries solve these challenges by building a native streaming platform from the ground up instead of using ETL and ESB tools in their architecture. This makes it possible to build and deploy independent, mission-critical, real-time streaming applications and microservices. The architecture leverages distributed processing and fault tolerance with fast failover, no-downtime rolling deployments and the ability to reprocess events, so you can recalculate output when your code changes. Integration and stream processing are still key functionality, but they can be realized in real time natively instead of using additional ETL, ESB or stream processing tools.
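As an illustration of how such event correlation can live natively in the streaming platform instead of an ESB or a separate stream processing engine, here is a minimal Kafka Streams sketch (the topic names and the ten-minute window are assumptions for the example):

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class CorrelationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-payment-correlator");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");     // hypothetical topic
        KStream<String, String> payments = builder.stream("payments"); // hypothetical topic

        // Stateful correlation: match each order with its payment within ten minutes,
        // the kind of contextual rule that previously required dedicated middleware.
        orders.join(payments,
                        (order, payment) -> order + " matched " + payment,
                        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(10)))
              .to("correlated-orders");

        new KafkaStreams(builder.build(), props).start(); // sketch only: no shutdown handling
    }
}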
OSSNA Building Modern Data Streaming Apps – Timothy Spann
OSSNA
Building Modern Data Streaming Apps
https://ossna2023.sched.com/event/1Jt05/virtual-building-modern-data-streaming-apps-with-open-source-timothy-spann-streamnative
Timothy Spann
Cloudera
Principal Developer Advocate
Data in Motion
In my session, I will show you some best practices I have discovered over the last seven years of building data streaming applications, including IoT, CDC, logs, and more. In my modern approach, we utilize several open-source frameworks to get the best features of each. We often start with Apache NiFi as the orchestrator of streams flowing into Apache Pulsar. From there, we build streaming ETL with Apache Spark and enhance events with Pulsar Functions for ML and enrichment. We make continuous queries against our topics with Flink SQL and stream data into various open-source data stores, including Apache Iceberg, Apache Pinot, and others. We use the best streaming tools for the current applications with the open-source stack FLiPN: https://www.flipn.app/
Updates: This will be in-person, with live coding based on feedback from the crowd. It will include new data stores, new sources, and data relevant to and from the Vancouver area, along with platform updates and the inclusion of Apache Iceberg, Apache Pinot, and some other new tech.
https://github.com/tspannhw/SpeakerProfile
Tim Spann is a Principal Developer Advocate for Cloudera. He works with Apache Kafka, Apache Flink, Flink SQL, Apache NiFi, MiNiFi, Apache MXNet, TensorFlow, Apache Spark, big data, the IoT, machine learning, and deep learning. Tim has over a decade of experience with the IoT, big data, distributed computing, messaging, streaming technologies, and Java programming. Previously, he was a Principal DataFlow Field Engineer at Cloudera, a Senior Solutions Engineer at Hortonworks, a Senior Solutions Architect at AirisData, a Senior Field Engineer at Pivotal and a Team Leader at HPE. He blogs for DZone, where he is the Big Data Zone leader, and runs a popular meetup in Princeton on Big Data, Cloud, IoT, deep learning, streaming, NiFi, the blockchain, and Spark. Tim is a frequent speaker at conferences such as ApacheCon, DeveloperWeek, Pulsar Summit and many more. He holds a BS and MS in computer science.
Timothy J Spann
Cloudera
Principal Developer Advocate
Hightstown, NJ
Website: https://datainmotion.dev/
Similar to Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka - Introduction to AiB's KafkaBlockchain (20)
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente… – confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Unlocking the Power of IoT: A comprehensive approach to real-time insights – confluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Hybrid Workshop: Stream Processing with Flink – confluent
Stream processing is a prerequisite of the data streaming stack, powering real-time applications and pipelines.
It enables greater data portability, optimized resource utilization, and a better customer experience by processing data streams in real time.
In our hands-on hybrid workshop, you will learn how to easily filter, join, and enrich real-time data within Confluent Cloud using our serverless Flink service.
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and SparkPlug – confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and Scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT-Platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
Event-driven architecture (EDA) will be the heart of MAPFRE's ecosystem. To remain competitive, today's companies depend increasingly on real-time data analysis, which gives them faster insights and response times. Doing business on real-time data means building situational awareness: detecting and responding to what is happening in the world right now.
Events and Microservices – Santander TechTalk – confluent
During this session we will examine how the worlds of events and microservices complement and improve each other, exploring how event-based patterns let us decompose monoliths in a scalable, resilient, and decoupled way.
The purpose of the session is to dive into Apache Kafka, data streaming, and Kafka in the cloud:
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Build real-time streaming data pipelines to AWS with Confluent – confluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
No matter whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment or are in a different situation where data protection and encryption of sensitive information is required, Confluent Service Mesh allows you to transparently encrypt your data without the need to make code changes to your existing applications.
Citi Tech Talk: Event Driven Kafka Microservices – confluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka.
Confluent & GSI Webinars series - Session 3confluent
An in depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real time data capabilities.
It will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers and Pre Sales, and also the more technically minded business aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
Transforming applications built with traditional messaging solutions such as TIBCO, MQ and Solace to be scalable, reliable and ready for the move to cloud
How can applications built with traditional messaging technologies like TIBCO, Solace and IBM MQ be modernised and made cloud-ready? What are the advantages of event streaming approaches to pub/sub vs. traditional message queues? What are the strengths and weaknesses of both approaches, and which use cases and requirements are actually a better fit for messaging than Kafka?
This session will show why the old paradigm does not work and that a new approach to the data strategy needs to be taken. It aims to show how a data streaming platform is integral to the evolution of a company’s data strategy and how Confluent is not just an integration layer but the central nervous system for an organisation.
You will also learn how to:
• Build products and features faster using a complete suite of connectors and stream-management tools, and connect your environments to data pipelines
• Protect your most critical data and workloads with built-in guarantees for security, governance, and resilience
• Deploy Kafka at scale in minutes while reducing the associated costs and operational burden
Confluent Partner Tech Talk with Synthesis – confluent
A discussion of the arduous planning process, and a deep dive into the design and architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka - Introduction to AiB's KafkaBlockchain
1. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Kai Waehner
Technology Evangelist
Confluent
kai.waehner@confluent.io
LinkedIn
@KaiWaehner
confluent.io
kai-waehner.de
Stephen Reed
CEO, Co-founder
AiB
stephen@ai-blockchain.com
LinkedIn
ai-blockchain.com
2. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka
6. Live Demo
Agenda
3. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
PRODUCTS
Built the FIRST DocuSign blockchain product. Allows for the writing of contracts on the blockchain with proof of trust. Software agents keep the blockchain tamper-proof and immutable. Current POCs across eCommerce, Healthcare and Supply Chain.
SOLUTIONS
• Artificial Intelligence, Blockchain, Ecommerce, Healthcare, Media, Payments and Supply Chain
• Securely store commercial and non-commercial transactions
• Energy efficient and ESG
GROWTH OPPORTUNITIES
How do we expand globally with new customers using AiB products? AiB offers services for scalability to meet all of the demands of global businesses, offering architecture assessment diagrams to clients.
TECHNOLOGY
We employ one-of-a-kind artificial agents on the blockchain to provide world-class security and to ensure all data is immutable.
PLATFORM
Its most visible success is its unique approach to financial transactions through the implementation of its Artificial Intelligent Blockchain Cloud Payment Platform. An enterprise-grade platform.
COMPANY
Since its founding in 2014, Ai-Blockchain has built a combination of innovative IP and blockchain products that have been recognized in the marketplace and by the USPTO.
[Diagram: potential customer use cases and problems → Ai-Blockchain products → Ai-Blockchain solutions and advantages → industry expansion]
Ai-Blockchain – Patented Private-Permissioned Blockchain for Smart Contracts
4. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
CUSTOMER CLIENT EXPERIENCE WITH AiB:
• Immediate go-to-market product execution using blockchain
• Artificial Intelligence and Machine Learning recommendation systems
Ai-Blockchain – Patented Private-Permissioned Blockchain for Smart Contracts
5. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Ai-Blockchain – Co-Founders
6. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain for Supply Chain Management (Manufacturing, Logistics, …)
https://coletiv.com/blog/blockchain-supply-chain-management-system/
https://www.disruptordaily.com/blockchain-use-cases-supply-chain-management/
7. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain in Healthcare
• Use cases:
  • Management of patient data, especially access control
  • RVU agreements with Ai-Blockchain
  • Management and use of academic research data
  • Logistics / supply chain for drugs, medical technology, and other products
• Often no need for Blockchain / DLT:
  • Only for identity management
  • Processing and (big) data storage outside the blockchain
https://digiforest.io/en/blog/blockchain-examples-in-healthcare
8. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain for Financial Services
https://www.accenture.com/t20170120T074124Z__w__/us-en/_acnmedia/Accenture/Conversion-Assets/DotCom/Documents/Global/PDF/Consulting/Accenture-Banking-on-Blockchain.pdf%23zoom=50
9. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka by AiB
6. Live Demo
Agenda
10. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Bitcoin – The Tip of the Iceberg!
‘Bitcoin’ compared to ‘blockchain’, ‘distributed ledger’, etc.
11. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain vs. Distributed Ledger Technology (DLT)
https://www.da.ventures/post/gaining-clarity-on-key-terminology-bitcoin-versus-blockchain-versus-distributed-ledger-technology
12. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain
https://www.investopedia.com/terms/b/blockchain.asp
13. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Blockchain Concepts
• Permissioned or permissionless (or consortium)
• Consensus algorithm (Proof of Work, Proof of Stake, or voting systems)
• Blockchain: global consensus across all nodes
• DLT: consensus without having to validate across the entire blockchain
• Blockchain:
  • A growing list of records, called blocks, linked using cryptography
  • Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (a minimal sketch follows below)
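The hash chaining in the last bullet is straightforward to illustrate. A minimal Java sketch (illustrative only, not any particular framework's implementation):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Instant;
import java.util.HexFormat;

public class BlockSketch {
    final String previousHash;    // cryptographic link to the preceding block
    final Instant timestamp;
    final String transactionData;
    final String hash;            // SHA-256 over all of the above

    BlockSketch(String previousHash, String transactionData) throws Exception {
        this.previousHash = previousHash;
        this.timestamp = Instant.now();
        this.transactionData = transactionData;
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        this.hash = HexFormat.of().formatHex(sha256.digest(
                (previousHash + timestamp + transactionData).getBytes(StandardCharsets.UTF_8)));
        // Changing any earlier block changes its hash, which breaks every later link:
        // that is what makes the chain tamper-evident.
    }
}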
14. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Challenges and Concerns of Blockchains
• Technical and organizational complexity
• Transaction speed
• Energy consumption
• Security
• Data tenancy
• Lifecycle costs
• Ease of integration
15. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Non-Blockchain DLTs solve these challenges to provide better performance and scalability.
Purpose-built solutions:
• Ripple: payment settling, currency exchange and remittance system
• Libra: global payment system and financial infrastructure
• IOTA: IoT transactions
Open and flexible solution for many use cases:
• Apache Kafka: open, scalable, real-time event streaming
Technology partners:
• Ai-Blockchain ("AiB"), USA-based
16. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka
6. Live Demo
Agenda
17. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Event Streaming in the Finance Industry
Check past Kafka Summit videos for details about the use cases:
https://kafka-summit.org/past-events/
18. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Tamper-Proof Event Streaming for (some of the) Use Cases in the Finance Industry
Themes: Real Time Processing · Digital Transformation · Strategic Goals
Use cases: Short-Sale Risk Calculation / Trade Approval · Mainframe Offloading and Replacement · Instant Payment · Next-Best Offer · Robotic Process Automation (e.g. Know Your Customer, KYC) · Customer Service (e.g. Chat Bots) · IT Modernization · Regulatory Reporting · Fraud Detection · Anomaly Detection Across Assets and Locations · Derivatives Pricing · Compliance · Trading Post-Processing · Strategic Planning and Simulations
19. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Event Streaming
[Diagram: events flow over time through stream processing: filter, analyze in-flight, create and store materialized views; consumers (C) read the resulting streams]
20. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Event Streaming Platform – The Immutable Commit Log
[Diagram: a producer (P) appends events to the log over time while consumers (C1, C2, C3) read from their own offsets]
21. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Event Streaming Platform – A Distributed System for 24/7 and Zero Data Loss
[Diagram: Topic1 with four partitions spread across Brokers 1–4; each partition is replicated three times, with one leader and two followers]
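For reference, a topology like the one in the diagram can be created with Kafka's AdminClient; the topic name, four partitions, and replication factor of three are simply the values shown above:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Four partitions, each replicated to three brokers (one leader + two followers)
            admin.createTopics(List.of(new NewTopic("topic1", 4, (short) 3))).all().get();
        }
    }
}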
22. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
An Event Streaming Platform is the Underpinning of an Event-driven Architecture
[Diagram: producers (microservices, mainframes, SaaS apps, mobile clients, database change events) publish streams of real-time events through connectors and stream processing apps; consumers (customer 360, real-time fraud detection, data warehouse, customer experiences) subscribe to them]
23. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Domain-Driven Design and Decoupled Applications
[Diagram: an event streaming platform (Kafka cluster with Schema Registry, audit logs, RBAC, etc.) decouples a blockchain domain (Kafka Connect with an Ethereum connector), a payment domain (custom applications in Java / C++ / Go / Python / KSQL), and a fraud domain (legacy integration via an ETL tool with a mainframe connector)]
24. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Reasons for Event Streaming with Apache Kafka
• Real time
• Scalable
• High throughput
• Cost reduction
• 24/7 – zero downtime, zero data loss
• Decoupling – storage, domain-driven design
• Data (re-)processing and stateful client applications
• Integration – connectivity to IoT, legacy, big data, everything
• Hybrid architecture – on premises, multi-cloud, edge computing
• Fully managed cloud
• No vendor lock-in
25. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Kafka clusters can stretch over regions
• Zero downtime + zero data loss (RPO=0 and RTO=0), e.g. stretched over US East + Mid + West
• Automated disaster recovery
• Sync or async replication per topic
• Offset preserving
• Automated client failover without custom code
Multi-Region Cluster (only available in Confluent Platform)
26. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Example of a Multi-Region Cluster in a Bank (large FinServ customer)
[Diagram: the ‘Payment’ topic is replicated synchronously between two locations; the ‘Log’ and ‘Location’ topics are replicated asynchronously]
• ‘Payment’ transactions enter from us-east and us-west with fully synchronous replication
• ‘Log’ and ‘Location’ information in the same cluster uses async replication, optimized for latency
• Automated disaster recovery (zero downtime, zero data loss)
Result: clearing time from ‘deposit’ to ‘available’ goes from 5 days to 5 seconds (including security checks)
(Only available in Confluent Platform)
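Multi-Region Clusters rely on Confluent-specific replica placement, which is not shown here. As a rough open-source analogy, the per-topic sync-vs-async trade-off can be approximated on the producer side with acks settings (a sketch, not the MRC mechanism itself):

import java.util.Properties;

public class PerTopicDurabilitySketch {
    // 'Payment'-style producers wait for all in-sync replicas before a send succeeds;
    // 'Log'/'Location'-style producers return once the leader alone has written.
    static Properties producerConfig(boolean waitForAllReplicas) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", waitForAllReplicas ? "all" : "1");
        return props;
    }
}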
27. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka
6. Live Demo
Agenda
28. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
• Real time
• High throughput
• Decentralized database
• Distributed log of records
• Immutable log
• Replication
• High availability
• Decoupling of applications / clients
• Role-based access control to data
• Tamper-proof
• Encrypted payloads
…Kafka is not a Blockchain!
29. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Kafka AND Blockchain
[Diagram: Kafka feeds a real-time instant payment app (Java, C++, Python, etc.) and a batch analytics platform (Spark, Splunk, etc.) while integrating with external blockchains (Bitcoin, Ethereum) and a Kafka-native blockchain]
Kafka AS Blockchain
[Diagram: the same real-time instant payment app and batch analytics platform consume the Kafka-native blockchain directly]
30. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka
6. Live Demo
Agenda
31. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
32. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
AiB’s BRICK Messaging Platform
For security, privacy and scalability, AiB’s BRICK platform is a platform-agnostic layer built for enterprise and micro payments in the cloud.
Third-Party Audited: AiB’s technology stack has been security-audited and stress-tested by third-party global cloud providers.
Full-Stack Capability: From deploying nodes and writing smart contracts up to front-end development and deployment.
Thought Leaders: Consistently asked to speak at international events to share our blockchain expertise.
Blockchain Experts: Established in 2014, Ai-Blockchain’s team of over 100 blockchain developers & experts have been building solutions for over six years and are ready today for any and all market opportunities.
Ai-Blockchain – Proprietary Blockchain Technology Stack
33. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
KafkaBlockchain Architecture
Producers reuse the KafkaBlockchain
library and Tamper-Evident Records.
Consumers reuse the KafkaBlockchain library and Tamper-Evident Records.
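The KafkaBlockchain library itself is open source on GitHub. Purely to illustrate the idea of tamper-evident records, a producer might chain message hashes like this (topic and header names are hypothetical, not the library's API):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TamperEvidentProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        // In practice the genesis hash is kept in a secure store rather than hard-coded
        String previousHash = "GENESIS";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String payload : new String[] {"tx-1", "tx-2", "tx-3"}) {
                // Each record's hash covers the previous record's hash, chaining the log
                String hash = HexFormat.of().formatHex(
                        sha256.digest((previousHash + payload).getBytes(StandardCharsets.UTF_8)));
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("blockchain-demo", payload);
                record.headers().add("prev-hash", previousHash.getBytes(StandardCharsets.UTF_8));
                record.headers().add("hash", hash.getBytes(StandardCharsets.UTF_8));
                producer.send(record);
                previousHash = hash;
            }
        }
    }
}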
34. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
KafkaBlockchain – Producer, Consumer & Verifier
Open source, Apache licensed, on GitHub
35. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
36. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
1. Use Cases for Secure and Tamper-Proof Data Processing
2. Distributed Ledger and Blockchain Technologies
3. Event Streaming with Apache Kafka
4. Relation between Kafka and Blockchain
5. A Tamper-Proof Blockchain implemented with Kafka
6. Live Demo
Agenda
37. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Demonstration: run-kafka-blockchain.demo.sh
• Demonstrate putting messages into a Kafka blockchain with one partition, one producer and one consumer.
• Describe the single blockchain partition.
• Demonstrate verifying messages with one consumer.
Demonstration: run-kafka-blockchain-multiple-partition.demo.sh
• Demonstrate putting messages into a Kafka blockchain with five partitions, one producer and one consumer.
• Describe the five blockchain partitions.
• Demonstrate verifying messages with five cooperating consumers, one for each partition.
KafkaBlockchain – Live Demo
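To complement the demo description, a minimal verification loop matching the producer sketch above, assuming a single-partition chain (the multi-partition demo runs one such verifier per partition):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.time.Duration;
import java.util.HexFormat;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChainVerifierSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "chain-verifier");
        props.put("auto.offset.reset", "earliest"); // verify from the start of the log
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String expectedPrev = "GENESIS";

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("blockchain-demo"));
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(5))) {
                String prev = header(rec, "prev-hash");
                String hash = header(rec, "hash");
                String recomputed = HexFormat.of().formatHex(
                        sha256.digest((prev + rec.value()).getBytes(StandardCharsets.UTF_8)));
                // Any mutated, dropped, or reordered record breaks the chain here
                if (!prev.equals(expectedPrev) || !recomputed.equals(hash)) {
                    throw new IllegalStateException("Tampering detected at offset " + rec.offset());
                }
                expectedPrev = hash; // walk the chain forward
            }
        }
    }

    private static String header(ConsumerRecord<String, String> rec, String key) {
        return new String(rec.headers().lastHeader(key).value(), StandardCharsets.UTF_8);
    }
}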
38. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
When to use Kafka vs. a "real blockchain framework"?
Use Apache Kafka with AiB's KafkaBlockchain for:
• Enterprise infrastructure
• Open, scalable, real-time requirements
• Flexible architectures for many use cases
Use Hyperledger, Ethereum, et al. for:
• Deployments spanning various independent organizations
• Scenarios where participants verify the distributed ledger contents themselves
• Specific use cases
• Server-side deployments managed and controlled by multiple organizations
• Scenarios where the business value outweighs the added complexity and project risk
Use Kafka and Blockchain together to combine the benefits of both (only where this makes sense!)
39. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
What about long-term storage in Kafka?
40. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Today, Kafka works well for recent events, short-horizon storage, and manual data balancing.
Kafka's present-day design offers extraordinarily low messaging latency by storing topic data on fast disks that are collocated with brokers. This is usually good. But sometimes you need to store a huge amount of data for a long time, and blockchain is exactly such a use case!
[Diagram: a Kafka broker combines processing (apps: transactions, auth, quota enforcement, compaction, …) and storage on local disks]
41. Building a Secure, Tamper-Proof & Scalable Blockchain on Top of Apache Kafka
Confluent Tiered Storage for Kafka
[Diagram: brokers keep recent data on local storage and offload older segments to a remote object store; processing (transactions, auth, quota enforcement, compaction, …) stays on the brokers]
Store forever – AiB's tamper-proof blockchain with KafkaBlockchain: older data is offloaded to inexpensive object storage, permitting it to be consumed at any time. Using AiB, the storage can be made tamper-proof and immutable.
Save $$$: storage limitations, like capacity and duration, are effectively uncapped.
Instantaneously scale up and down: Kafka clusters can automatically self-balance load and hence scale elastically.
(Only available in Confluent Platform)
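As a rough idea of how Tiered Storage is switched on in Confluent Platform, a broker configuration sketch (property names follow the Confluent Platform documentation; the bucket and region values are placeholders, so check the docs for your version):

# Enable Tiered Storage on the broker (Confluent Server)
confluent.tier.feature=true
confluent.tier.enable=true
confluent.tier.backend=S3
confluent.tier.s3.bucket=my-kafka-tiered-storage   # placeholder
confluent.tier.s3.region=us-east-1                 # placeholder
# Keep only the most recent data on local broker disks (the "hotset")
confluent.tier.local.hotset.ms=3600000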