Technical breakout from Confluent’s streaming event in Munich, presented by Sam Julian, Chief Cloud Engineer at E.On SE. This three-day hands-on course covered how to build, manage, and monitor Kafka clusters using industry best practices developed by the world’s foremost Apache Kafka™ experts: how Kafka and the Confluent Platform work, how their main subsystems interact, and how to set up, manage, monitor, and tune your cluster.
Jay Kreps is the CEO of Confluent, Inc., the company behind the popular Apache Kafka® messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects, including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system), and Apache Samza (a stream processing system).
From legacy systems to microservices and back | Andrea Gioia, Quantyca | Hosted by Confluent
Legacy systems are the kings of our IT architecture. They rule the evolution of the technology ecosystem that hosts them, thanks to the control they have acquired over time over core data and key business processes. Legacy systems, however, impose severe limits when it comes to addressing critical business needs and seizing opportunities for future growth.
The Digital Integration Hub (DIH) is a popular architecture for modernizing legacy systems, allowing IT to evolve faster and integrate modern technologies more easily.
In this talk we’ll:
- Review the general DIH architecture
- Illustrate different patterns for offloading data from legacy systems to Kafka, with their pros and cons in terms of consistency and latency
- Present different ways to consume data offloaded from legacy systems to Kafka, and how each fits different use cases
- Show how to close the loop, using Kafka and the CQRS pattern to handle all the data moving back from applications to legacy systems (a minimal sketch follows below)
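As a hedged illustration of that last point, here is a minimal sketch of the write-back leg of a CQRS loop: applications publish command events to a Kafka topic, and a small adapter consumes them and applies them to the legacy system. The topic name, cluster address, and the applyToLegacySystem hook are hypothetical placeholders, not material from the talk.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class LegacyWriteBackAdapter {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster address
            props.put("group.id", "legacy-write-back");
            props.put("enable.auto.commit", "false");         // commit only after the legacy write succeeds
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("customer-commands")); // hypothetical command topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        applyToLegacySystem(record.key(), record.value()); // e.g. call a legacy gateway
                    }
                    consumer.commitSync(); // at-least-once: the legacy side must tolerate replays
                }
            }
        }

        private static void applyToLegacySystem(String key, String command) {
            System.out.printf("Applying command %s for key %s%n", command, key);
        }
    }

Committing offsets only after the legacy write gives at-least-once delivery, which is why the legacy-facing operation should be idempotent.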
IIoT with Kafka and Machine Learning for Supply Chain Optimization In Real Ti... | Kai Wähner
I did a webinar with Confluent's partner Expero about "Apache Kafka and Machine Learning for Real Time Supply Chain Optimization". This is a great example for anybody in the automation / Industrial IoT (IIoT) space, such as automotive, manufacturing, and logistics.
We explain how a real-time event streaming platform can integrate with the legacy world and proprietary IIoT protocols (such as Siemens S7, Modbus, Beckhoff ADS, and OPC-UA). You can process the data at scale and then ingest it into a modern data store (such as AWS S3, Snowflake, or MongoDB) or an analytics / machine learning framework (such as TensorFlow, PyTorch, or Azure Machine Learning Service).
Removing performance bottlenecks with Kafka Monitoring and topic configuration | Knoldus Inc.
Apache Kafka is a distributed messaging system used to build real-time data pipelines and streaming applications. Since applications rely heavily on efficient data transfer, message-passing platforms like Kafka cannot afford breakdowns or poor performance.
But how do we ensure that Kafka is running well and successfully streaming messages at low latency? This is where Kafka monitoring steps in.
Here’s the agenda of the webinar:
> Why Kafka monitoring?
> Top 10 Kafka metrics to focus on
> How to change Kafka topic configuration at runtime?
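To ground that last agenda item, here is a hedged sketch of changing a topic configuration at runtime with Kafka's AdminClient (the same change can also be made with the kafka-configs.sh CLI). The topic name and retention value are hypothetical.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class TopicConfigUpdater {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster address

            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments"); // hypothetical topic
                // Lower retention to 1 hour without restarting brokers or recreating the topic.
                AlterConfigOp setRetention = new AlterConfigOp(
                        new ConfigEntry("retention.ms", "3600000"), AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
            }
        }
    }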
Building from scratch a real-time pipeline that can handle, store, analyze, and visualize more than a billion transactions per day has never been easier. In this build-as-we-go talk, we’ll create a front-to-back architecture that does exactly that.
* we’ll start with a simple producer emitting a few messages and publishing them onto a Kafka topic (a minimal producer sketch appears below)
* on the consuming end of the topic, a Spark-based Streamliner process will pick the messages up and store them in MemSQL
* ZoomData will connect to MemSQL for real-time visualization, where we’ll be able to ask various questions and see the answers change as data flows through the system
* we’ll quickly make the entire pipeline more complex by increasing both the volume and the complexity of the data, until we reach 100K transactions per second
As we walk through this demo, we will touch on cross-data-center Kafka and MemSQL setups and any speed limitations, and echo back to real-life use cases of a similar setup in Goldman’s Asset Management division for Portfolio Management & Trading.
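A minimal sketch of that first step, assuming a local cluster and a hypothetical transactions topic; the payload shown is a stand-in for real transaction records:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleTransactionProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 10; i++) {
                    // Key by account so all events for one account land in the same partition.
                    producer.send(new ProducerRecord<>("transactions", "account-" + (i % 3),
                            "{\"amount\": " + (i * 100) + "}"));
                }
                producer.flush(); // make sure everything is on the broker before exiting
            }
        }
    }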
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben... | Confluent
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses; it is the core of Tinder’s data infrastructure. This rich flow of both client and backend data has been extended to serve a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, giving backend developers easier access to shared client-side data. We do this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC-first architecture.
Topics we’ll discuss on decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of the processes that allow non-programmers to write and deploy event-driven data flows.
– Showing, end to end, how dynamic event processing can create other stream processes, via a dynamic control-plane topology pattern and the broadcast state pattern.
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that’s being backfilled into Kafka, all online (and why this is not necessarily a “good” idea).
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, and Spark to encourage the best design patterns for developers coming from traditional service-oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems.
– Best practices in common data flow patterns, such as shared state via RocksDB and Kafka Streams (see the sketch after this list), as well as complementary tools in the Apache ecosystem.
– The simplicity and power of streaming SQL with microservices.
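As a hedged sketch of the shared-state pattern mentioned above: Kafka Streams can materialize a changelog-backed, RocksDB-based store from a topic, which co-located services can then query locally. The topic, store, and application names here are hypothetical, not Tinder's actual setup.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class SharedStateTopology {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "profile-state");     // hypothetical app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical cluster

            StreamsBuilder builder = new StreamsBuilder();
            // Count events per user into a RocksDB-backed store ("user-counts"),
            // replicated via an internal changelog topic for fault tolerance.
            builder.stream("user-events", Consumed.with(Serdes.String(), Serdes.String()))
                   .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                   .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("user-counts"));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }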
Building Stateful applications on Streaming Platforms | Premjit Mishra, Dell ... | Hosted by Confluent
Can and should Apache Kafka replace a database? How long can and should I store data in Kafka? How can I query and process data in Kafka? These are common questions that come up more and more. This session explains the idea behind databases and different features like storage, queries, transactions, and processing, to evaluate when Kafka is a good fit and when it is not. The discussion includes Kafka-native add-ons like Tiered Storage for long-term, cost-efficient storage and ksqlDB as an event streaming database. The relationship and trade-offs between Kafka and other databases are explored so the two can complement each other rather than replace one another. This includes different options for pull- and push-based bi-directional integration.
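On the "how can I query data in Kafka" question, one Kafka-native answer besides ksqlDB is Kafka Streams interactive queries, which expose a materialized store for point lookups. A minimal, hedged sketch, assuming a running Streams application with a store named "user-counts" like the one built earlier:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class InteractiveQueryExample {
        // Look up the current count for one user from the running Streams app's local state.
        static Long countForUser(KafkaStreams streams, String userId) {
            ReadOnlyKeyValueStore<String, Long> store = streams.store(
                    StoreQueryParameters.fromNameAndType("user-counts",
                            QueryableStoreTypes.keyValueStore()));
            return store.get(userId); // null if we have never seen this user
        }
    }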
Ramon Marquez, Confluent, Solutions Engineer
We will see what Apache Kafka really is: the de facto data streaming solution, used by companies of the caliber of Spotify, Netflix, and LinkedIn, among others.
https://www.meetup.com/Mexico-Kafka/events/275757361/
Building Event-Driven Services with Apache Kafka | Confluent
Should you use REST to sew services together? Is it better to use a richer, brokered protocol? This practical talk will dig into how we piece services together in event-driven systems, how we use a distributed log to create a central, persistent narrative, and what benefits we reap from doing so.
Introduction to Apache Kafka and Confluent... and why they matter! | Paolo Castagna
This is a short introduction to Apache Kafka and Confluent (the company founded by the creator of Kafka). The slides cover Apache Kafka APIs, including Kafka Connect and Kafka Streams (both part of Apache Kafka). Other open source, ASL-licensed projects are mentioned: KSQL, Schema Registry, REST Proxy, etc.
Many thanks to Codemotion and Seacom for hosting the event.
Caching for Microservices Architectures: Session II - Caching Patterns | VMware Tanzu
In the first webinar of the series we covered the importance of caching in microservice-based application architectures—in addition to improving performance it also aids in making content available from legacy systems, promotes loose coupling and team autonomy, and provides air gaps that can limit failures from cascading through a system.
To reap these benefits, though, the right caching patterns must be employed. In this webinar, we will examine various caching patterns and shed light on how they deliver the capabilities needed by our microservices. What about rapidly changing data, and concurrent updates to data? What impact do these and other factors have on various use cases and patterns?
Understanding data access patterns, covered in this webinar, will help you make the right decisions for each use case. Beyond the simplest of use cases, caching can be tricky business; join us for this webinar to see how best to use these patterns.
Jagdish Mirani, Cornelia Davis, Michael Stolz, Pulkit Chandra, Pivotal
Top 5 Event Streaming Use Cases for 2021 with Apache Kafka | Kai Wähner
Apache Kafka and Event Streaming are two of the most relevant buzzwords in tech these days. Ever wonder what the predicted TOP 5 Event Streaming Architectures and Use Cases for 2021 are? Check out the following presentation. Learn about edge deployments, hybrid and multi-cloud architectures, service mesh-based microservices, streaming machine learning, and cybersecurity.
On-demand video recording: https://videos.confluent.io/watch/XAjxV3j8hzwCcEKoZVErUJ
Death of the dumb pipes: Using Apache Kafka® for Integration projects | Hosted by Confluent
Guru Sattanathan, Confluent, Senior Solutions Engineer
Enterprise Integration technologies (aka middleware) are the key enablers when it comes to real-time data flows and Event-Driven Architecture. From real-time payments to e-commerce and travel booking systems, everything is powered by middleware underneath. Middleware has transformed a lot of things, but with caveats!
Are ESBs and MQs enough for today’s integration needs? Do you know their technical debts?
If you are someone looking to integrate your applications, or an Integration Architect, this session is for you. It's time to refresh yourself and see how organizations are building integrations today.
In this session, we will go in this order:
- Recap on Enterprise Integration technologies
- What are the key flaws, and what needs improvement?
- What is Apache Kafka?
- Rethinking integration using Apache Kafka
https://www.meetup.com/KafkaMelbourne/events/280590162/
Fan-out, fan-in & the multiplexer: Replication recipes for global platform di... | Hosted by Confluent
This session will dive into our most successful (and unsuccessful!) multi-cluster event replication patterns.
An x-ray of the cross-cluster distribution model that powers our globally distributed APIs will touch on the benefits this model has provided in terms of client API experience, delivery agility, and developer experience.
We will focus on recipes for effective use of MirrorMaker event replication to power platform distribution, including the challenges of managing a 'fan-in' event replication workflow: pulling events created in satellite clusters back to a mothership cluster for processing.
We will introduce the elegant technique of replication event multiplexing, which can be used to simplify the burden of managing a 'fan-in' replication topology by eliminating regional awareness from the application domain and improving replication health monitoring and observability.
Data Transformations on Ops Metrics using Kafka Streams (Srividhya Ramachandr... | Confluent
How Priceline uses Kafka Streams to save terabytes per day on licenses for our monitoring systems. Kafka Streams powers a big part of our analytics and monitoring pipelines and delivers operational-metrics transformations in real time. All logs and operational metrics from the APIs of Priceline’s products flow into Kafka and are ingested into our monitoring system, Splunk, for alerting and monitoring. We have implemented data transformations, aggregations, and summarizations using Kafka Streams to eliminate PCI/PII violations in the log data and to aggregate metrics, so that we avoid ingesting sub-second metrics and ingest only at the granularity we need. We will cover the need for custom Serdes and custom partitioners, and why we don’t use the Confluent registry. You will also learn how Priceline uses a self-service model to configure its streams, topics, and consumers using the Data Collection Console, our UI for managing the Kafka streaming pipelines.
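A hedged sketch of the kind of in-flight PII scrubbing described above. The topics, the regex rule, and the application id are hypothetical illustrations; Priceline's actual transformations are not public.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    public class PiiScrubber {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pii-scrubber");      // hypothetical
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("raw-logs", Consumed.with(Serdes.String(), Serdes.String()))
                   // Redact anything that looks like a 16-digit card number before it reaches Splunk.
                   .mapValues(line -> line.replaceAll("\\b\\d{16}\\b", "****REDACTED****"))
                   .to("scrubbed-logs", Produced.with(Serdes.String(), Serdes.String()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }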
User Behavior Analysis with Session Windows and Apache Kafka's Streams API | Confluent
For many industries, the need to group related events together based on a period of activity or inactivity is key. Advertising businesses and content producers are just a few examples of where session windows can be used to better understand user behavior.
While such sessionization has been possible in Apache Kafka up to this point, implementing it has been rather complex and required leveraging low-level APIs. In the most recent release of Kafka, however, new capabilities have been added making session windows much easier to implement.
In this online talk, we’ll introduce the concept of a session window, talk about common use cases, and walk through how Apache Kafka can be used for session-oriented use cases.
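A minimal, hedged sketch of a session window in the Kafka Streams DSL: clicks for the same user are grouped into a session that closes after five minutes of inactivity. The topic name and gap length are hypothetical.

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.SessionWindows;

    public class SessionizationTopology {
        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
                   .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                   // A session ends after 5 minutes with no events for that key.
                   .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
                   .count()
                   .toStream()
                   // The windowed key carries the session's start and end timestamps.
                   .foreach((windowedUser, clickCount) ->
                           System.out.printf("%s had %d clicks in one session%n",
                                   windowedUser.key(), clickCount));
            return builder;
        }
    }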
How Yelp Leapt to Microservices with More than a Message Queue | Confluent
Without seeing what’s wrong with today’s messaging queues, it can initially be confusing to view Apache Kafka as something more. By adding functionality, true storage, and strong guarantees, it opens up opportunities to take full advantage of a publish/subscribe model.
Joined by Yelp’s Justin Cunningham, we’ll see how their infrastructure has quickly evolved. Powered by Kafka, Yelp has made the leap to microservices and is seeing the benefits in efficiency and performance.
Speakers:
Justin Cunningham
Technical Lead, Software Engineer, Yelp
Gehrig Kunz
Technical Product Marketing Manager, Confluent
DataOps Automation for a Kafka Streaming Platform (Andrew Stevenson + Spiros ... | Hosted by Confluent
DataOps challenges us to build data experiences in a repeatable way. For those with Kafka, this means finding a way to deploy flows in an automated and consistent fashion.
The challenge is to make the deployment of Kafka flows consistent across different technologies and systems: the topics, the schemas, the monitoring rules, the credentials, the connectors, the stream processing apps, and ideally not coupled to a particular infrastructure stack.
In this talk we will discuss the different approaches to automating the deployment of Kafka flows, including Git operators and Kubernetes operators, and their benefits and disadvantages. We will walk through and demo deploying a flow on AWS EKS with MSK and Kafka Connect using GitOps practices, including a stream processing application and an S3 connector with credentials held in AWS Secrets Manager.
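As a hedged sketch of the "declared, repeatable Kafka resources" idea (not the speakers' actual tooling): a deployment step can reconcile a declared topic list against the cluster with the AdminClient, treating "already exists" as success so the pipeline stays idempotent. Topic names and sizing are hypothetical.

    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class TopicReconciler {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical

            // In a GitOps setup this list would be parsed from versioned config, not hard-coded.
            List<NewTopic> declared = List.of(
                    new NewTopic("orders", 6, (short) 3),
                    new NewTopic("orders-dlq", 1, (short) 3));

            try (Admin admin = Admin.create(props)) {
                for (NewTopic topic : declared) {
                    try {
                        admin.createTopics(List.of(topic)).all().get();
                    } catch (ExecutionException e) {
                        // Re-running the pipeline against an existing topic is not an error.
                        if (!(e.getCause() instanceof TopicExistsException)) throw e;
                    }
                }
            }
        }
    }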
Set your Data in Motion with Confluent & Apache Kafka Tech Talk Series LME | Confluent
Confluent Platform is supporting London Metal Exchange’s Kafka Centre of Excellence across a number of projects, with the main objective of providing a reliable, resilient, scalable, and overall efficient Kafka-as-a-Service model to teams across the entire London Metal Exchange estate.
Project Ouroboros: Using StreamSets Data Collector to Help Manage the StreamS... | Pat Patterson
On a typical day we see hundreds of downloads of StreamSets Data Collector, our open source data integration tool. We used to wrangle our download logs using a combination of the AWS S3 command line, sed, grep, awk and other tools, all run from a shell script (on my laptop!) once a week. This was a classic example of a brittle, hard to maintain, custom data integration. One day it dawned on me, "This is crazy, we have a tool that can do all this!". In this session, I'll explain how I built a dataflow pipeline to stream content delivery network (CDN) logs from S3 to MySQL in real-time, allowing us to gain valuable insights into our open source community. You'll also learn how we use the same techniques to not only gain insights into our community on Slack, but also build tools to better serve them.
How we eased our security journey with OAuth (Goodbye Kerberos!) | Paul Makka... | Hosted by Confluent
Saxo Bank is on a growth journey, and Kafka is a critical component of that success. Securing our financial event streams is a top priority for us, and we initially started with an on-prem Kafka cluster secured with (the de facto) Kerberos. However, as we modernize and scale, the demands of hybrid cloud, multiple domains, polyglot computing, and Data Mesh require us to also modernize our approach to security. In this talk, we will describe how we took the default (non-production-ready) Kafka OAuth implementation and productionized it to work with Kafka in Azure Cloud, including the Kafka stack and clients. By enabling both Kerberos and OAuth running on-prem and in the cloud, we now plan to gracefully retire Kerberos from our estate.
How to Discover, Visualize, Catalog, Share and Reuse your Kafka Streams (Jona... | Hosted by Confluent
As Kafka deployments grow within your organization, so do the challenges around lifecycle management. For instance, do you really know what streams exist, who is producing and consuming them? What is the effect of upstream changes? How is this information kept up to date, so it is relevant and consistent to others looking to reuse these streams? Ever wish you had a way to view and visualize graphically the relationships between schemas, topics and applications? In this talk we will show you how to do that and get more value from your Kafka Streaming infrastructure using an event portal. It’s like an API portal but specialized for event streams and publish/subscribe patterns. Join us to see how you can automatically discover event streams from your Kafka clusters, import them to a catalog and then leverage code gen capabilities to ease development of new applications.
Apache Kafka® and Analytics in a Connected IoT World | Confluent
Apache Kafka® and Analytics in a Connected IoT World, Kai Waehner, Sr. Solutions Engineer Advanced Technology Group, Confluent
https://www.meetup.com/Berlin-Apache-Kafka-Meetup-by-Confluent/events/273166575/
Partner Development Guide for Kafka Connect | Confluent
This guide is intended to provide useful background to developers implementing Kafka Connect sources and sinks for their data stores. Visit www.confluent.io for more information.
Strata+Hadoop 2017 San Jose - The Rise of Real Time: Apache Kafka and the Str... | Confluent
The move to streaming architectures from batch processing is a revolution in how companies use data. But what is the state of the union for stream processing, and what gaps remain in the technology we have? How will this technology impact the architectures and applications of the future? Jay Kreps explores the future of Apache Kafka and the stream processing ecosystem.
Strata+Hadoop 2017 San Jose: Lessons from a year of supporting Apache Kafka | Confluent
The number of deployments of Apache Kafka at enterprise scale has greatly increased in the years since Kafka’s original development in 2010. Along with this rapid growth has come a wide variety of use cases and deployment strategies that transcend what Kafka’s creators imagined when they originally developed the technology. As the scope and reach of streaming data platforms based on Apache Kafka has grown, the need to understand monitoring and troubleshooting strategies has as well.
Dustin Cote and Ryan Pridgeon share their experience supporting Apache Kafka at enterprise scale and explore monitoring and troubleshooting techniques to help you avoid pitfalls when scaling large Kafka deployments.
Topics include:
- Effective use of JMX for Kafka (a minimal JMX sketch follows below)
- Tools for preventing small problems from becoming big ones
- Efficient architectures proven in the wild
- Finding and storing the right information when it all goes wrong
Visit www.confluent.io for more information.
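To make the JMX point concrete, here is a hedged sketch of polling one broker metric over a remote JMX connection. It assumes the broker JVM was started with remote JMX enabled on port 9999 (a hypothetical setup); the MBean shown, kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions, is a standard Kafka broker metric.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class UnderReplicatedPoller {
        public static void main(String[] args) throws Exception {
            // Assumes the broker exposes JMX remotely on port 9999 (hypothetical host and port).
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbeans = connector.getMBeanServerConnection();
                Object value = mbeans.getAttribute(
                        new ObjectName("kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions"),
                        "Value");
                // Anything above zero here deserves attention before it becomes an outage.
                System.out.println("Under-replicated partitions: " + value);
            }
        }
    }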
Monitoring Apache Kafka with Confluent Control Center | Confluent
Presentation by Nick Dearden, Director, Product and Engineering, Confluent
It’s 3 am. Do you know how your Kafka cluster is doing?
With over 150 metrics to think about, operating a Kafka cluster can be daunting, particularly as a deployment grows. Confluent Control Center is the only complete monitoring and administration product for Apache Kafka, designed specifically to make the Kafka operator's life easier.
Join Confluent as we cover how Control Center is used to simplify deployment and operability, and to ensure message delivery.
Watch the recording: https://www.confluent.io/online-talk/monitoring-and-alerting-apache-kafka-with-confluent-control-center/
Distributed stream processing with Apache Kafka | Confluent
A modern business operates 24/7 and generates data continuously. Shouldn’t we process it continuously too?
A rich ecosystem of real-time data-processing frameworks, tools, and systems has been forming around Apache Kafka, allowing data to be processed continuously as it occurs. Jay Kreps will introduce Kafka and explain why it has become the de facto standard for streaming data. He will draw on practical experience building stream-processing applications to discuss the differences between architectures and the challenges each presents. Jay will then outline the Kafka Streams API, which brings new stream processing functionality to Kafka, and explain how it helps tame some of the complexity in real-time architectures.
Visit www.confluent.io for more information
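As a minimal illustration of the kind of continuous processing the Kafka Streams API enables, here is the canonical word count, a sketch assuming hypothetical input and output topic names rather than material from the talk itself:

    import java.util.Arrays;
    import java.util.Locale;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.Produced;

    public class WordCountTopology {
        public static StreamsBuilder build() {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("text-lines", Consumed.with(Serdes.String(), Serdes.String()))
                   // Split each line into words and re-key the stream by word.
                   .flatMapValues(line -> Arrays.asList(line.toLowerCase(Locale.ROOT).split("\\W+")))
                   .groupBy((key, word) -> word, Grouped.with(Serdes.String(), Serdes.String()))
                   .count()                               // a continuously updated table of counts
                   .toStream()
                   .to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));
            return builder;
        }
    }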
Data Pipelines Made Simple with Apache Kafka | Confluent
Presentation by Ewen Cheslack-Postava, Engineer, Apache Kafka Committer, Confluent
In streaming workloads, data produced at the source is often not useful further down the pipeline, or it requires some transformation to get it into usable shape. Similarly, where sensitive data is concerned, filtering of topics helps ensure that the wrong data doesn't get to the wrong place.
The newest release of Apache Kafka now offers the ability to do transformations on individual messages, making it possible to implement finer-grained transformations customized to your unique needs. In this session we’ll talk about the new single message transform capabilities, how to use them to implement things like data masking and advanced partitioning, and when you’ll need to use more complex tools like the Kafka Streams API instead.
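A hedged sketch of a single message transform in a Kafka Connect connector configuration, built here as a plain Java map for illustration (in practice it would be JSON posted to the Connect REST API). MaskField is a transform bundled with Apache Kafka; the connector name, connector class, topic, and field name are hypothetical.

    import java.util.Map;

    public class MaskingConnectorConfig {
        public static Map<String, String> build() {
            return Map.of(
                    "name", "orders-sink",                                         // hypothetical
                    "connector.class", "com.example.SomeSinkConnector",            // hypothetical
                    "topics", "orders",
                    // Transforms are applied to every record before the sink sees it.
                    "transforms", "maskSsn",
                    "transforms.maskSsn.type",
                            "org.apache.kafka.connect.transforms.MaskField$Value", // bundled SMT
                    "transforms.maskSsn.fields", "ssn");                           // field to blank out
        }
    }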
What's new in Confluent 3.2 and Apache Kafka 0.10.2 | Confluent
With the introduction of the Connect and Streams APIs in 2016, Apache Kafka is becoming the de facto solution for anyone looking to build a streaming platform. The community continues to add capabilities to make it the complete solution for streaming data.
Join us as we review the latest additions in Apache Kafka 0.10.2. In addition, we’ll cover what’s new in Confluent Enterprise 3.2 that makes it possible to run Kafka at scale.
Power of the Log: LSM & Append Only Data Structures | Confluent
This talk is about the beauty of sequential access and append-only data structures, viewed through a little-known paper entitled “Log Structured Merge Trees”. LSM describes a surprisingly counterintuitive approach to storing and accessing data in a sequential fashion. It came to prominence in Google's Bigtable paper, and today the use of logs, LSM, and append-only data structures drives many of the world's most influential storage systems: Cassandra, HBase, RocksDB, Kafka, and more. Finally, we'll look at how the beauty of sequential access goes beyond database internals, right through to how applications communicate, share data, and scale. (A tiny append-only log sketch follows.)
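As a hedged, toy illustration of why append-only writes are fast: every write lands at the end of a file through a sequential channel, and nothing is ever updated in place. This is a sketch of the core idea only, not how Kafka or RocksDB actually implement their logs.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class AppendOnlyLog implements AutoCloseable {
        private final FileChannel channel;

        public AppendOnlyLog(Path segment) throws IOException {
            // APPEND guarantees every write lands at the current end of the file,
            // so the storage device sees purely sequential I/O.
            this.channel = FileChannel.open(segment,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
        }

        /** Appends one record and returns its byte offset, which acts as its immutable ID. */
        public long append(String record) throws IOException {
            long offset = channel.size(); // the next write starts at the current end of file
            channel.write(ByteBuffer.wrap((record + "\n").getBytes(StandardCharsets.UTF_8)));
            return offset;
        }

        @Override
        public void close() throws IOException {
            channel.force(true); // flush to durable storage before closing
            channel.close();
        }
    }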
A Practical Guide to Selecting a Stream Processing Technology | Confluent
Presented by Michael Noll, Product Manager, Confluent.
Why are there so many stream processing frameworks that each define their own terminology? Are the components of each comparable? Why do you need to know about spouts or DStreams just to process a simple sequence of records? Depending on your application’s requirements, you may not need a full framework at all.
Processing and understanding your data to create business value is the ultimate goal of a stream data platform. In this talk we will survey the stream processing landscape, the dimensions along which to evaluate stream processing technologies, and how they integrate with Apache Kafka. Particularly, we will learn how Kafka Streams, the built-in stream processing engine of Apache Kafka, compares to other stream processing systems that require a separate processing infrastructure.
In the last few years, Apache Kafka has been used extensively in enterprises for real-time data collection, delivery, and processing. In this presentation, Jun Rao, Co-founder, Confluent, gives a deep dive on some of the key internals that help make Kafka popular.
- Companies like LinkedIn are now sending more than 1 trillion messages per day to Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
- Many companies (e.g., financial institutions) are now storing mission critical data in Kafka. Learn how Kafka supports high availability and durability through its built-in replication mechanism.
- One common use case of Kafka is for propagating updatable database records. Learn how a unique feature called compaction in Apache Kafka is designed to solve this kind of problem more naturally.
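On that last point, compaction is enabled per topic: with cleanup.policy=compact, Kafka retains at least the latest record for each key rather than deleting by age, which fits the updatable-database-record use case. A hedged sketch of creating such a topic with the AdminClient (names and sizing are hypothetical):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CompactedTopicExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical

            try (Admin admin = Admin.create(props)) {
                NewTopic userProfiles = new NewTopic("user-profiles", 3, (short) 3)
                        // Keep the latest value per key instead of deleting records by age.
                        .configs(Map.of("cleanup.policy", "compact"));
                admin.createTopics(List.of(userProfiles)).all().get();
            }
        }
    }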
A stream processing platform is not an island unto itself; it must be connected to all of your existing data systems, applications, and sources. In this talk we will provide different options for integrating systems and applications with Apache Kafka, with a focus on the Kafka Connect framework and the ecosystem of Kafka connectors. We will discuss the intended use cases for Kafka Connect and share our experience and best practices for building large-scale data pipelines using Apache Kafka.
Paolo Castagna is a Senior Sales Engineer at Confluent. His background is in 'big data', and he has seen first hand the shift happening in the industry from batch to stream processing and from big data to fast data. His talk will introduce Kafka Streams and explain why Apache Kafka is a great option and simplification for stream processing.
The Data Dichotomy: Rethinking the Way We Treat Data and Services | Confluent
Presenter: Ben Stopford, Engineer, Confluent
Services come with a problem: they’re not well suited to sharing data. This talk will examine the underlying dichotomy we all face as we piece such systems together, one that is not well served today. The solution lies in blending the old with the new, and Apache Kafka plays a central role.
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.
Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.
In this session we talk about how Apache Kafka helps you to radically simplify your data processing architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault-tolerance, which are typically associated exclusively with cluster technologies. Notably, we introduce Kafka’s Streams API, its abstractions for streams and tables, and its recently introduced Interactive Queries functionality. As we will see, Kafka makes such architectures equally viable for small, medium, and large scale use cases.
When it Absolutely, Positively, Has to be There: Reliability Guarantees in Ka... | Confluent
In the financial industry, losing data is unacceptable. Financial firms are adopting Kafka for their critical applications. Kafka provides the low latency, high throughput, high availability, and scale that these applications require. But can it also provide complete reliability? As a system architect, when asked “Can you guarantee that we will always get every transaction,” you want to be able to say “Yes” with total confidence.
In this session, we will go over everything that happens to a message, from producer to consumer, and pinpoint all the places where data can be lost if you are not careful. You will learn how developers and operations teams can work together to build a bulletproof data pipeline with Kafka. And if you need proof that you built a reliable system, we’ll show you how you can build the system to prove this too.
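A hedged sketch of the producer-side knobs such a bulletproof pipeline typically turns, using standard Apache Kafka client settings (the talk's exact recommendations may differ):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class ReliableProducerConfig {
        public static KafkaProducer<String, String> create() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // no duplicates on retry
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000); // overall send deadline
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<>(props);
        }
    }

On the broker side, the matching topic settings are a replication factor of at least 3 with min.insync.replicas of 2; on the consumer side, committing offsets only after processing completes closes the loop.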
View the recording:
http://hortonworks.com/webinar/accelerating-real-time-data-ingest-hadoop/
Hadoop didn’t disrupt the data center. The exploding amounts of data did. But, let’s face it, if you can’t move your data to Hadoop, then you can’t use it in Hadoop. The experts from Hortonworks, the #1 leader in Hadoop development, and Attunity, a leading data management software provider, cover:
- How to ingest your most valuable data into Hadoop using Attunity Replicate
- About how customers are using Hortonworks DataFlow (HDF) powered by Apache NiFi
- How to combine the real-time change data capture (CDC) technology with connected data platforms from Hortonworks
We discuss how Attunity Replicate and Hortonworks Data Flow (HDF) work together to move data into Hadoop.
Leveraging Mainframe Data for Modern Analytics | Confluent
“The mainframe is going away” is as true now as it was 10, 20, and 30 years ago. Mainframes are still crucial in handling critical business transactions; they were, however, built for an era where batch data movement was the norm, and they can be difficult to integrate into today’s data-driven, real-time, analytics-focused business processes and the environments that support them. Until now.
Join experts from Confluent, Attunity, and Capgemini for a one-hour online talk session where you’ll learn how to:
- Unlock your mainframe data with unique change data capture (CDC) functionality, without incurring the complexity and expense of sending ongoing queries into the mainframe database
- See how CDC benefits advanced analytics approaches such as deep machine learning and predictive analytics
- Deliver ongoing streams of data in real time to the most demanding analytics environments
- Ensure that your analytics environment includes the broadest possible range of data sources and destinations, with true enterprise-grade functionality
- Identify use cases that can help you get started delivering value to the business, moving from POC to Pilot to Production
Streaming Data Ingest and Processing with Apache Kafka | Attunity
Apache™ Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. It offers high throughput, reliability, and replication. To manage growing data volumes, many companies are leveraging Kafka for streaming data ingest and processing.
Join experts from Confluent, the creators of Apache™ Kafka, and the experts at Attunity, a leader in data integration software, for a live webinar where you will learn how to:
- Realize the value of streaming data ingest with Kafka
- Turn databases into live feeds for streaming ingest and processing
- Accelerate data delivery to enable real-time analytics
- Reduce skill and training requirements for data ingest
The recorded webinar on slide 32 includes a demo using automation software (Attunity Replicate) to stream live changes from a database into Kafka and also includes a Q&A with our experts.
For more information, please go to www.attunity.com/kafka.
Transform Your Mainframe Data for the Cloud with Precisely and Apache Kafka | Precisely
Your mainframe does hard work for your business, supporting essential computing transactions every day. However, mainframe data does not easily integrate with the cloud platforms driving data-driven, real-time, analytics-focused business processes. Integrating data from this critical technology often results in high costs and downtime. So, what can you do?
View this on-demand webinar to learn how Precisely Connect can help use the power of Apache Kafka to eliminate data silos and make cloud-based, event-driven data architectures a reality. Start your cloud transformation journey today, knowing you don’t need to leave essential transaction data behind!
During this webinar, you will learn more about:
· Where to begin your cloud transformation journey using mainframe data and Apache Kafka
· What you need to move mainframe data to the cloud while reducing costs, modernizing architectures, and using the staff you have today
· How Precisely Connect customers are using change data capture and Apache Kafka to deliver real-time insights to the cloud
Streaming Time Series Data With Kenny Gorman and Elena Cuevas | Current 2022 | Hosted by Confluent
Modern streaming use cases are generating massive amounts of data, much of which needs to be organized and queried over time. The sheer volume and complexity of this problem present new challenges for data engineers and developers alike.
To solve this problem Apache Kafka and MongoDB Time Series collections are a powerful combination. In this talk, Kenny Gorman and Elena Cuevas will present how Apache Kafka on Confluent Cloud can stream massive amounts of data to Time Series Collections via the MongoDB Connector for Apache Kafka. Elena and Kenny will discuss the required configuration details and critical components of Confluent Cloud and MongoDB Atlas as well as some tips, tricks and best practices.
You will leave armed with the knowledge of how Confluent Cloud, Apache Kafka, MongoDB Atlas, and Time Series collections fit into your event-driven architecture.
Apache Kafka Use Cases_ When To Use It_ When Not To Use_.pdf | Noman Shaikh
In today's data-driven world, the need for real-time data streaming and processing has become paramount. Apache Kafka, an open-source distributed event streaming platform, has emerged as a fundamental technology in meeting this demand.
Modern Cloud-Native Streaming Platforms: Event Streaming Microservices with A... | Confluent
Microservices, events, containers, and orchestrators are dominating our vernacular today. As operations teams adapt to support these technologies in production, cloud-native platforms like Pivotal Cloud Foundry and Kubernetes have quickly risen to serve as force multipliers of automation, productivity and value.
Apache Kafka® is providing developers with a critically important component as they build and modernize applications for cloud-native architecture.
This talk will explore:
• Why cloud-native platforms and why run Apache Kafka on Kubernetes?
• What kind of workloads are best suited for this combination?
• Tips to determine the path forward for legacy monoliths in your application portfolio
• Demo: Running Apache Kafka as a Streaming Platform on Kubernetes
Beyond the brokers, a tour of the Kafka ecosystem | Florent Ramière | Confluent
During the Confluent streaming event in Paris, Florent Ramière, Technical Account Manager at Confluent, goes beyond brokers, introducing a whole new ecosystem with Kafka Streams, KSQL, Kafka Connect, REST Proxy, Schema Registry, MirrorMaker, etc.
Dataservices - Processing Big Data The Microservice Way | Josef Adersberger
We see a big data processing pattern emerging that uses the microservice approach to build an integrated, flexible, and distributed system of data processing tasks. We call this the Dataservice pattern. In this presentation we'll introduce Dataservices: their basic concepts, the technology typically in use (like Kubernetes, Kafka, Cassandra, and Spring), and some architectures from real life.
Introduction to GCP DataFlow Presentation | Knoldus Inc.
In this session, we will learn how Dataflow, a fully managed streaming analytics service, minimizes latency, processing time, and cost through autoscaling and batch processing.
Solution Brief: Real-Time Pipeline Accelerator | BlueData, Inc.
Get started with Spark Streaming, Kafka, and Cassandra for real-time data analytics.
BlueData makes it easy to deploy Spark infrastructure and applications on-premises. The BlueData EPIC software platform is purpose-built to simplify and accelerate the deployment of Spark, Hadoop, and other tools for big data analytics, leveraging Docker containers and virtualized infrastructure.
Our new Real-Time Pipeline Accelerator solution provides the software and professional services you need for building data pipelines in a multi-tenant environment for Spark Streaming, Kafka, and Cassandra. With help from the BlueData team, you’ll also have two end-to-end real-time data pipelines as a starting point.
Learn more about BlueData at www.bluedata.com
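To sketch the shape of such a pipeline, the example below reads from Kafka with Spark Structured Streaming and writes each micro-batch to Cassandra. It assumes the DataStax spark-cassandra-connector is on the classpath; the broker address, topic, keyspace, and table names are placeholders rather than anything from the BlueData solution itself.

import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToCassandraSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("kafka-to-cassandra-sketch")
            .getOrCreate();

        // Read events from a Kafka topic as an unbounded stream.
        Dataset<Row> events = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "events")
            .load()
            .selectExpr("CAST(key AS STRING) AS id", "CAST(value AS STRING) AS payload");

        // Write each micro-batch to Cassandra via the DataStax connector.
        StreamingQuery query = events.writeStream()
            .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batch, batchId) ->
                batch.write()
                    .format("org.apache.spark.sql.cassandra")
                    .option("keyspace", "demo")
                    .option("table", "events")
                    .mode("append")
                    .save())
            .start();

        query.awaitTermination();
    }
}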
Best Practices for Building Hybrid-Cloud Architectures | Hans Jespersen, confluent
Afternoon opening presentation during Confluent’s streaming event in Paris, presented by Hans Jespersen, VP WW Systems Engineering at Confluent.
Introducing Events and Stream Processing into Nationwide Building Society (Ro... | confluent
Facing Open Banking regulation, rapidly increasing transaction volumes and increasing customer expectations, Nationwide took the decision to take load off their back-end systems through real-time streaming of data changes into Kafka. Hear about how Nationwide started their journey with Kafka, from their initial use case of creating a real-time data cache using Change Data Capture, Kafka and Microservices to how Kafka allowed them to build a stream processing backbone used to reengineer the entire banking experience including online banking, payment processing and mortgage applications. See a working demo of the system and what happens to the system when the underlying infrastructure breaks. Technologies covered include: Change Data Capture, Kafka (Avro, partitioning and replication) and using KSQL and Kafka Streams Framework to join topics and process data.
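The pattern of joining topics that the talk describes can be sketched in a few lines of Kafka Streams. This is a minimal illustration, not Nationwide's actual code: the topic names (accounts, transactions, enriched-transactions), the string values, and the five-minute join window are all hypothetical.

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

public class TopicJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topic-join-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // Two CDC-fed topics keyed by account id (hypothetical names).
        KStream<String, String> accounts = builder.stream("accounts");
        KStream<String, String> transactions = builder.stream("transactions");

        // Join each transaction to account records arriving within 5 minutes.
        KStream<String, String> enriched = transactions.join(
            accounts,
            (txn, account) -> txn + " | " + account,
            JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
            StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        enriched.to("enriched-transactions");
        new KafkaStreams(builder.build(), props).start();
    }
}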
Delivering the power of data using Spring Cloud DataFlow and DataStax Enterpr... | VMware Tanzu
SpringOne Platform 2017
Gilbert Lau, DataStax; Wayne Lund, Pivotal
"Spring Cloud Data Flow satisfies all of the demands of modern streaming and task workloads. A growing number of customers are viewing Pivotal Cloud Foundry as an ideal runtime for these types of workloads to take advantage of all of the microservice architecture features of Spring Boot apps leveraging Spring Cloud Services. This is only half of the equation. Once the streaming data is persisted on their database, our customers want to generate actionable insights to provide the best customer experience to stay on top of the competitive marketplace. DataStax Enterprise (DSE) is a single and unified big data platform with Apache Cassandra NoSQL database at its core. Integrated within each node of DSE is powerful indexing, search through Apache Solr, analytics through Apache Spark, and a enterprise-ready graph functionality. It is by far the only operational data platform which can scale linearly in excess of 1,000 nodes, with no single point of failure, and is capable of providing real-time active-everywhere replication across many datacenters and cloud providers.
In this presentation and demo we will take a common social data set and show SCDF advantages on PCF for microservice scaling and pipelining data into a DataStax Enterprise Cassandra NoSQL database. Then followed by extracting meaningful information through DataStax Enterprise Search, DataStax Enterprise Analytics, and DataStax Cassandra Service Broker Tile for PCF using a Spring Boot Dashboard application."
Similar to Confluent & Attunity: Mainframe Data Modern Analytics
Catch the Wave: SAP Event-Driven and Data Streaming for the Intelligence Ente... | confluent
In our exclusive webinar, you'll learn why event-driven architecture is the key to unlocking cost efficiency, operational effectiveness, and profitability. Gain insights on how this approach differs from API-driven methods and why it's essential for your organization's success.
Unlocking the Power of IoT: A comprehensive approach to real-time insights | confluent
In today's data-driven world, the Internet of Things (IoT) is revolutionizing industries and unlocking new possibilities. Join Data Reply, Confluent, and Imply as we unveil a comprehensive solution for IoT that harnesses the power of real-time insights.
Hybrid workshop: Stream Processing with Flink | confluent
Stream processing is a prerequisite of the data streaming stack, powering real-time applications and pipelines.
It enables greater data portability, optimized resource utilization, and a better customer experience by processing data streams in real time.
In our hands-on hybrid workshop, you will learn how to easily filter, join, and enrich real-time data within Confluent Cloud using our serverless Flink service.
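For a taste of what the workshop covers, here is a minimal sketch using Flink's Table API in Java. The table definitions (orders, customers) and the datagen connector are stand-ins for illustration; in Confluent Cloud's serverless Flink service you would run the same SQL against tables backed by Kafka topics.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FilterJoinEnrichSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical streaming tables; datagen produces random rows for demo purposes.
        tEnv.executeSql(
            "CREATE TEMPORARY TABLE orders (order_id STRING, customer_id STRING, amount DOUBLE) " +
            "WITH ('connector' = 'datagen')");
        tEnv.executeSql(
            "CREATE TEMPORARY TABLE customers (id STRING, segment STRING) " +
            "WITH ('connector' = 'datagen')");

        // Filter, join, and enrich in one continuous query.
        tEnv.executeSql(
            "SELECT o.order_id, o.amount, c.segment " +
            "FROM orders AS o JOIN customers AS c ON o.customer_id = c.id " +
            "WHERE o.amount > 100").print();
    }
}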
Industry 4.0: Building the Unified Namespace with Confluent, HiveMQ and Spark... | confluent
Our talk will explore the transformative impact of integrating Confluent, HiveMQ, and SparkPlug in Industry 4.0, emphasizing the creation of a Unified Namespace.
In addition to the creation of a Unified Namespace, our webinar will also delve into Stream Governance and scaling, highlighting how these aspects are crucial for managing complex data flows and ensuring robust, scalable IIoT platforms.
You will learn how to ensure data accuracy and reliability, expand your data processing capabilities, and optimize your data management processes.
Don't miss out on this opportunity to learn from industry experts and take your business to the next level.
Event-driven architecture (EDA) will be the heart of MAPFRE's ecosystem. To remain competitive, today's companies increasingly depend on real-time data analysis, which gives them faster insights and response times. Running a business on real-time data means being situationally aware, detecting and responding to what is happening in the world right now.
Events and Microservices - Santander TechTalk | confluent
In this session we will examine how the worlds of events and microservices complement and improve each other, exploring how event-driven patterns allow us to decompose monoliths in a scalable, resilient, and decoupled way.
The purpose of the session is to take a dive into Apache Kafka, data streaming, and Kafka in the cloud:
- Dive into Apache Kafka
- Data Streaming
- Kafka in the cloud
Build real-time streaming data pipelines to AWS with Confluent | confluent
Traditional data pipelines often face scalability issues and challenges related to cost, their monolithic design, and reliance on batch data processing. They also typically operate under the premise that all data needs to be stored in a single centralized data source before it's put to practical use. Confluent Cloud on Amazon Web Services (AWS) provides a fully managed cloud-native platform that helps you simplify the way you build real-time data flows using streaming data pipelines and Apache Kafka.
Q&A with Confluent Professional Services: Confluent Service Meshconfluent
Whether you are migrating your Kafka cluster to Confluent Cloud, running a cloud-hybrid environment, or are in a different situation where data protection and encryption of sensitive information are required, Confluent Service Mesh allows you to transparently encrypt your data without making code changes to your existing applications.
Citi Tech Talk: Event Driven Kafka Microservices | confluent
Microservices have become a dominant architectural paradigm for building systems in the enterprise, but they are not without their tradeoffs. Learn how to build event-driven microservices with Apache Kafka.
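At its core, such a microservice is often a consume-process-produce loop. A minimal sketch with the plain Java clients follows; the broker address, topic names, and "validation" step are placeholders, not anything from the Citi talk.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderService {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-service");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                for (ConsumerRecord<String, String> rec :
                         consumer.poll(Duration.ofMillis(500))) {
                    // React to the incoming event and emit a follow-on event.
                    String result = "validated:" + rec.value();
                    producer.send(new ProducerRecord<>("orders-validated", rec.key(), result));
                }
            }
        }
    }
}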
Confluent & GSI Webinars series - Session 3confluent
An in-depth look at how Confluent is being used in the financial services industry. Gain an understanding of how organisations are utilising data in motion to solve common problems and gain benefits from their real-time data capabilities.
The session will look more deeply into some specific use cases and show how Confluent technology is used to manage costs and mitigate risks.
This session is aimed at Solutions Architects, Sales Engineers, and pre-sales, as well as more technically minded, business-aligned people. Whilst this is not a deeply technical session, a level of knowledge around Kafka would be helpful.
Transforming applications built with traditional messaging solutions such as TIBCO, MQ and Solace to be scalable, reliable and ready for the move to cloud
How can applications built with traditional messaging technologies like TIBCO, Solace, and IBM MQ be modernised and made cloud-ready? What are the advantages of event streaming approaches to pub/sub versus traditional message queues? What are the strengths and weaknesses of both approaches, and which use cases and requirements are actually a better fit for messaging than Kafka?
This session will show why the old paradigm does not work and why a new approach to the data strategy needs to be taken. It aims to show how a data streaming platform is integral to the evolution of a company’s data strategy, and how Confluent is not just an integration layer but the central nervous system for an organisation.
You will also learn how to:
• Build products and features faster with a complete suite of connectors and stream-management tools, and connect your environments to data pipelines
• Protect your most critical data and workloads with built-in guarantees for security, governance, and resilience
• Deploy Kafka at scale in minutes while reducing the associated costs and operational burden
Confluent Partner Tech Talk with Synthesis | confluent
A discussion of the arduous planning process and a deep dive into the design and architectural decisions.
Learn more about the networking, RBAC strategies, the automation, and the deployment plan.
Essentials of Automation: Optimizing FME Workflows with Parameters | Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* | Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning over just any symbolic structure is not sufficient to really reap the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and Sales | Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Smart TV Buyer Insights Survey 2024 | 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for or limiting to your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
DevOps and Testing slides at DASA Connect | Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in the different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 4 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Confluent & Attunity: Mainframe Data Modern Analytics
1. Leveraging mainframe data for modern analytics
Confluent and Attunity provide seamless mainframe integration and query offload for modern, distributed analytics environments via Apache Kafka
Large enterprises, government agencies, and many other organizations have long relied on mainframe computers to deliver core systems managing some of their most valuable and sensitive data. Mainframes remain important to business today and are likely to continue to be critical infrastructure for many organizations for the foreseeable future. As important as these systems are, however, they can be difficult to integrate into today’s data-driven, analytics-focused business processes as well as the environments that support them.
Most businesses looking to integrate mainframes into a broader analytical environment take a brute-force approach, querying directly into the mainframe system to access the data they need. This approach can be costly in environments where billing is based on how many MIPS (millions of instructions per second) the system uses, because each new query eats up more instructions, adding to that month’s bill. Additionally, a significant amount of tuning and optimization is typically required to get a data warehouse sitting on a mainframe to support the kind of broad, deep, and fast analysis that businesses need today.
One solution for a business facing these challenges is to offload the data from the mainframe to another environment such as Apache® Hadoop™ or an MPP database. Mainframe offload is an effective approach for some business environments, but it involves an expensive and time-consuming process to implement. And more often than not, the result of all that effort is just a new data silo. Businesses need to produce results from multiple data types and sources in real time. Doing so requires a new kind of solution: one that keeps mainframe data current and available to the broader ecosystem without all the environmental complexity and without breaking the bank.
2. Attunity unlocks mainframe data
Attunity Replicate is a software solution that provides an efficient and cost-effective way to bring mainframe data into a modern analytics environment with its change data capture (CDC) technology for DB2, VSAM, and IMS. Attunity Replicate provides low-latency and low-impact data integration for mainframe databases. With Attunity Replicate, you can extract and source mainframe data efficiently in real time, and deliver those changes to a data warehouse or another enterprise analytics environment. The software replicates data continuously with low latency, providing a real-time alternative to a “lift and shift” approach to moving data.
Attunity Replicate enables change-record filtering and supports a wide variety of data sources and platforms. It supports transactional delivery for replicating into relational databases, batch-optimized delivery for data warehouses, message-oriented delivery for application- and streaming-based integration platforms, and change audit trails to facilitate other forms of integration. The software can be easily installed and configured using an intuitive, wizard-based GUI, so no extra coding is required.
Because it replicates data continuously, Attunity Replicate eliminates the need to move data in periodic batches, keeping that data in sync and providing real-time access to the data in other platforms. Attunity Replicate provides log-based CDC for DB2 (leveraging the DB2 journals) as well as for VSAM and IMS, rather than performing repeated brute-force queries into the data. This provides a unique, non-invasive approach to capturing changes, which does not incur the hefty price tag in MIPS described above. Attunity Replicate enables event-driven, stream-based processing via Apache Kafka.
Kafka brings mainframe data to life
Clients can leverage Kafka’s modern distributed architecture to move mainframe data in real time. Unlike a Hadoop or database offload, which must first store and index data so that it can be accessed by querying, Kafka captures data for processing in flight. Although it may look like a messaging system, with producers publishing messages that are available to consumers in milliseconds, Kafka works more like a distributed database for your mainframe and other data.
When you write a message to Kafka, it is replicated to multiple servers and committed to disk. Kafka provides a core commit log for the entire distributed environment, persisting the data so that any number of systems can consume it at any time. Kafka’s consumers perform the database read function while its producers (in this case, your mainframe environment) perform the database write function. Once the data is published, there is no need to incur the cost in time and MIPS associated with going back to the mainframe.
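To make the write path concrete, here is a minimal producer sketch in Java. Setting acks=all makes the broker acknowledge only after the write is replicated, matching the durability behavior described above. In practice a CDC tool such as Attunity Replicate plays this producer role; the broker address, topic name, and payload here are hypothetical.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class MainframeChangePublisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all"); // acknowledge only after the write is replicated
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one hypothetical change record captured from the mainframe.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("mainframe.changes", "account-42", "{\"balance\": 1017.50}");
            RecordMetadata meta = producer.send(record).get();
            System.out.printf("committed to %s-%d at offset %d%n",
                meta.topic(), meta.partition(), meta.offset());
        }
    }
}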
Kafka’s unique architecture enables its fast performance. A single Kafka broker can handle hundreds of megabytes of reads and writes per second from thousands of clients. Each broker can persist terabytes of messages without impacting performance.
[Diagram: database changes are captured from the mainframe by Attunity Replicate (CDC and bulk load) and streamed, alongside IoT data, log events, and web events, into Confluent Enterprise, which feeds the data lake and EDW.]