Introduction To Streaming Data and Stream Processing with Apache Kafka

Description

Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of continuously changing data in real time? The answer is stream processing, and one system that has become a core hub for streaming data is Apache Kafka.

This presentation will give a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will explain how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library.
This is talk 1 out of 6 from the Kafka Talk Series.
http://www.confluent.io/apache-kafka-talk-series/introduction-to-stream-processing-with-apache-kafka

Transcript

  1. • Everything in the company is a real-time stream
     • > 1.2 trillion messages written per day
     • > 3.4 trillion messages read per day
     • ~ 1 PB of stream data
     • Thousands of engineers
     • Tens of thousands of producer processes
     • Used as commit log for distributed database
  2. Coming Up Next
     10/6: Deep Dive Into Apache Kafka (Jun Rao)
     10/27: Data Integration with Kafka (Gwen Shapira)
     11/17: Demystifying Stream Processing (Neha Narkhede)
     12/1: A Practical Guide To Selecting A Stream Processing Technology (Michael Noll)
     12/15: Streaming in Practice: Putting Apache Kafka in Production (Roger Hoover)

Editor's Notes

  • Hi, I’m Jay Kreps, I’m one of the creators of Apache Kafka and also one of the co-founders of Confluent, the company driving Kafka development as well as developing Confluent Platform, the leading Kafka distribution.
    Welcome to our Apache Kafka Online Talk Series.
    This first talk is going to introduce Kafka and the problems it was built to solve. This is a series of talks meant to help introduce you to the world of Apache Kafka and stream processing. Along the way I’ll give pointers to areas we are going to dive into in more depth in upcoming talks.

  • Rather than starting off by diving into a bunch of Kafka features let me instead introduce the problem area. So what is the problem we have today that needs a new thing?

    To show that, let me start by just laying out the architecture for most companies.
  • Most applications are request/response (client/server)
    HTTP services
    OLTP databases
    Key/value stores
    You send a request they send back a response. These do little bits of work quickly. UI rendering is inherently this way: client sends a request to fetch the data to display the UI.
    Inherently synchronous—can’t display the UI until you get back the response with the data.
  • The second big area is batch processing.
    This is the domain of the data warehouse and Hadoop clusters.
    Cron jobs.
    These are usually once a day things, though you can potentially run them a little quicker.
    So this is the architecture we have today. What are the problems?
  • How does data get around?
  • Database data, log data
    Lots of systems—databases, specialized systems like search, caches
    Business units
    N^2 connections
    Tons of glue code to stitch it all together
  • Request/response is inherently synchronous.
    Hard to scale.
  • Either big apps with huge amounts of work per request, or lots of little microservices…still all that work is synchronous.
    It has to be synchronous: if you make an HTTP request but don’t wait for the response, you don’t know whether it actually happened or not.
  • Example: retail
    Sales are synchronous—you give me money and I give you a product (or commit to ship you a product) and give you a receipt or confirmation number.
    But a lot of the backend isn’t synchronous—I need to process shipments of new products, adjust prices, do inventory adjustments, re-order products, do things like analytics.
    Most of these don’t make sense to do in the process of a single sale—they are asynchronous. If something gets borked in my inventory reordering process I don’t want to block sales.
  • These are the two problems that data streams can solve:
    Data pipeline sprawl
    Asynchronous services
  • This is what that architecture looks like relying on streaming.
    Data pipelines go to the streaming platform, no longer N^2 separate pipelines.
    Async apps can feed off of this as well.
    Obviously that streaming box is going to be filled by Kafka.
    Now let’s dive into these two areas.
  • Companies are real-time not batch
  • Event = something that happened
    Record
    A product was viewed, a sale occurred, a database was updated, etc
    It’s a piece of data, a fact. But can also be a trigger or command (a sale occurred, so now let’s reorder).
    Not specific to a particular system or service, just a fact.
    Let’s look at a few concrete examples to get a feel for it, first some simple ones, then something a bit more complex.
  • Event is “a web page was viewed” or “an error occurred” or whatever you’re logging.
    In fact the “log file” is totally incidental to the data being recorded—the data in the log is clearly a sequence of events.
  • Sensors can also be represented as event streams. The event is something like “the value of this sensor is X”
    This covers a lot of instrumentation of the world, IOT use cases, logistics and vehicle positions, or even taking readings of metrics from monitoring counters or gauges in your apps. All these sensors can be captured into a stream of events.
    Okay, those were the easy and obvious ones, now let’s look at something more surprising.
  • Databases can be thought of as streams of events!
    This isn’t obvious, but it’s really important because most valuable data is stored in databases.
    What do I mean that you can think of a database as a stream of events?
    Well what’s the most common data representation in a database?
    Table/Stream duality.
  • It’s a table.
    A table looks something like this, a rectangle with columns, right?
    In my simplified table I am just going to have two columns, a primary key and a value…both of these could be made up of multiple columns in real life.
    But in reality this representation of a table is a little bit oversimplified because tables are always being updated (that is the whole point of a database, after all). But this table is just static. How can I represent a table that is getting updated like our sensors or log files are?
  • Well, the easy way to do it would be to just dump out a full copy of the table periodically. In this picture I’ve represented a sequence of snapshots of the table as time goes by.
  • Now it’s a bit inefficient to take a full dump of the table over and over, right? Probably if your tables are like mine, not all your rows are getting updated all the time. An alternative that might be a bit more efficient would be to just dump out the rows that changed. This would give me a sequence of “diffs”. Now imagine I increase the frequency of this process to make the diff as small as possible. Clearly the smallest possible diff would be a single changed row.
    Here I’ve listed the sequence of single changed rows, each represented by a single PUT operation (an update or insert).
    Now the key thing is that if I have this sequence of changes it actually represents all the states of my table.
    And, of course, that sequence of updates is a stream of events. The event is something like “the value of this primary key is now X”; replaying these events, as sketched below, rebuilds the table.
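    To make this table/stream duality concrete, here is a minimal, self-contained Java sketch (plain Java, not the Kafka API; the Put record and the sample keys and values are made up for illustration) that replays a sequence of PUT events to rebuild the table’s current state:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical change event: "the value of this primary key is now X".
record Put(String key, String value) {}

public class TableStreamDuality {
    public static void main(String[] args) {
        // The change stream: every update to the table, in order.
        List<Put> changeStream = List.of(
                new Put("user-1", "home_page_view"),
                new Put("user-2", "product_view"),
                new Put("user-1", "checkout"));   // a later event overwrites user-1

        // Replaying the stream from the beginning reproduces the table's latest state.
        Map<String, String> table = new LinkedHashMap<>();
        for (Put put : changeStream) {
            table.put(put.key(), put.value());
        }
        System.out.println(table);  // {user-1=checkout, user-2=product_view}
    }
}
```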
  • Now I can represent all these different data pipelines as event streams.
    I can capture changes from a data system or application, and take that stream and feed it into another system.
  • That is going to be the key to solving my pipeline sprawl problem.
    Instead of having N^2 different pipelines, one for each pair of systems I am going to have a central place that hosts all these event streams—the streaming platform.
    This is a central way that all these systems and applications can plug in to get the streams they need.
    So I can capture streams from databases, and feed them into DWH, Hadoop, monitoring and analytics systems.
    The key advantage is that there is a single integration point for each thing that wants data.
    Now obviously to make this work I’m going to need to ensure I have met the reliability, scalability, and latency guarantees for each of these systems.
  • Let’s dive into an example to see this model of data in action.
    Let’s say that we have a web app that is recording events about a product being viewed. And let’s say we are using Hadoop for analytics and want to get this data there.
    In this model the web app publishes its stream of clicks to our streaming platform and Hadoop loads these. With only two systems, the only real advantage is some decoupling—the web app isn’t tied to the particular technology we are using for analytics, and the Hadoop cluster doesn’t need to be up all the time.
    But the advantage is that additional uses of this data become really easy.
  • For example, if other apps also generate product view events, they just publish them; Hadoop doesn’t need to know there are more publishers of this type of event.
  • And if additional use cases arise they can be added as well. In this example there turn out to be a number of other uses for product views—analytics, recommendations, security monitoring, etc. These can all just subscribe without any need to go back and modify any of the apps that generate product views.
  • Okay so we talked about how streams can be used for solving the data pipeline sprawl problem. Now let’s talk about the solution to the second problem---too much synchrony.
    This comes from being able to process real-time streams of data and this is called stream processing.
    So what is stream processing?
  • Best way to think about it is as a third paradigm for programming. We talked about request/response and batch processing. Let’s dive into these a bit and use them to motivate stream processing.
  • HTTP/REST
    All databases
    Run all the time
    Each request totally independent—No real ordering
    Can fail individual requests if you want
    Very simple!
    About the future!
  • “Ed, the MapReduce job never finishes if you watch it like that”
    Job kicks off at a certain time
    Cron!
    Processes all the input, produces all the output
    Data is usually static
    Hadoop!
    DWH, JCL
    Archaic but powerful. Can do analytics! Complex algorithms!
    Also can be really efficient!
    Inherently high latency
  • Generalizes request/response and batch.
    Program takes some inputs and produces some outputs
    Could be all inputs
    Could be one at a time
    Runs continuously forever!
  • Basically a service that processes, reacts to, or transforms streams of events.
    Asynchronous so it allows us to decouple work from our request/response services.
  • Many things are naturally thought of as stream processing.
    Walmart blog
  • Now we’ve talked about these two motivations for streams---solving pipeline sprawl and asynchronous stream processing.
    It won’t surprise anyone that when I talk about this streaming platform that enables these pipelines and processing I am talking about Apache Kafka.
  • So what is Kafka?
    It’s a streaming platform.
    Lets you publish and subscribe to streams of data, stores them reliably, and lets you process them in real time.
    The second half of this talk will dive into Apache Kafka and talk about how it acts as a streaming platform and lets you build real-time streaming pipelines and do stream processing.
  • It’s widely used and in production at thousands of companies.
    Let’s walk through the basics of Kafka and understand how it acts as a streaming platform.
  • Event = Record = Message
    Timestamp, an optional key, and a value
    Key is used for partitioning. Timestamp is used for retention and processing. (A sketch of a record is shown below.)
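    As a rough sketch of what a record looks like through the Java client (the topic name, key, and value here are made up), a producer record carries a topic, an optional partition, a timestamp, an optional key, and a value:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordAnatomy {
    public static void main(String[] args) {
        // topic, partition (null = derive from key), timestamp, key, value
        ProducerRecord<String, String> record = new ProducerRecord<>(
                "product-views",            // hypothetical topic name
                null,                       // partition: chosen by hashing the key when null
                System.currentTimeMillis(), // timestamp: used for retention and time-based processing
                "user-42",                  // optional key: same key -> same partition
                "viewed product 123");      // value: the event payload
        System.out.println(record);
    }
}
```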
  • Not an Apache web-server log
    Different: Commit log
    Stolen from distributed database internals
    Key abstraction for systems, real-time processing, data integration
    Formalization of a stream
    Reader controls progress—unifies batch and real-time
  • Relate to pub/sub
  • The world is a set of processes/threads (each a total order), but with no order between them
  • Four APIs to read and write streams of events
    The first two are simple: the producer and consumer APIs allow applications to write to and read from Kafka.
    The Connect API allows building connectors that integrate Kafka with existing systems or applications.
    The Streams API allows stream processing on top of Kafka.
    We’ll go through each of these briefly.
  • The producer writes (publishes) streams of events to Kafka to be stored.
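    A minimal producer sketch using the Java client; the broker address and topic name are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");       // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                                // wait until the write is replicated

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one event to a (hypothetical) topic; send() is asynchronous.
            producer.send(new ProducerRecord<>("product-views", "user-42", "viewed product 123"),
                    (metadata, exception) -> {
                        if (exception != null) exception.printStackTrace();
                    });
            producer.flush(); // make sure the record actually goes out before we exit
        }
    }
}
```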
  • The consumer reads (subscribes to) streams of events from topics.
  • Kafka topics are always multi-reader and can be scaled out. So in this example I have two logical consumers: A and B. Each of these logical consumers is made up of multiple physical processes, potentially running on different machines. Two processes for A and three for B.
    These groups are dynamic: processes can join a group or leave a group at any time and Kafka will balance the load over the new set of processes.
  • So for example if one of the B processes dies, the data being consumed by that process will be transitioned to the remaining B processes automatically.
    These groups are a fundamental abstraction in Kafka and they support not only groups of consumers, but also groups of connectors or stream processors.
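    A minimal consumer sketch; the group.id is what places a process into one of these logical groups, so starting a second copy of this program with the same group.id would split the partitions between the two instances (broker address, group name, and topic are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "analytics");                 // processes sharing this id share the work
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("product-views"));
            while (true) {
                // Kafka rebalances partitions across the group when members join or fail.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```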
  • In our streaming platform vision we had a number of apps or data systems that were integrated with Kafka. Either they are loading streams of data out of Kafka or publishing streams of data into Kafka.
    If these systems are built to directly integrate with Kafka they could use the producer and consumer APIs. But many apps and databases simply have read and write APIs; they don’t know anything about Kafka. How can we make integration with this kind of existing app or system easy? After all, these systems don’t know that they need to push data into Kafka or pull data out.
    The answer is the Connect APIs
  • These APIs allow writing reusable connectors to Kafka.
    A source is a connector that reads data out of the external system and publishes to Kafka.
    A sink is a connector that pulls data out of Kafka and writes it to the external system.
    Of course you could build this integration using the producer and consumer apis, so how is this better?
  • REST APIs for management
    A few examples help illustrate this; one sketch follows below.
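    One illustrative sketch, assuming a Connect worker listening on localhost:8083 and the FileStreamSource connector that ships with Kafka: a connector is just a bit of JSON posted to the Connect REST API, here from Java’s built-in HTTP client:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Hypothetical source connector: tail a file and publish each line to a topic.
        String connectorJson = """
            {
              "name": "demo-file-source",
              "config": {
                "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                "tasks.max": "1",
                "file": "/tmp/app.log",
                "topic": "app-log-events"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))  // Connect worker REST endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```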
  • We’ll dive into Kafka Connect in more detail in the third installment of this talk series, which goes far deeper into the practice of building streaming pipelines with Kafka.
  • The final API for Kafka is the Streams API.
    This API lets you build real-time stream processing on top of Kafka.
    These stream processors take input from Kafka topics and either react to the input or transform it into output written to output topics.
  • So in effect a stream processing app is basically just some code that consumes input and produces output.
    So why not just use the producer and consumer APIs?
    Well, it turns out there are some hard parts to doing real-time stream processing; a minimal Streams example is sketched below.
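    A minimal Kafka Streams sketch (the topic names and the filtering logic are made up for illustration) that continuously reads one topic, transforms it, and writes to another:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "product-view-filter"); // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> views = builder.stream("product-views");       // input topic
        views.filter((key, value) -> value != null && !value.isBlank())        // drop empty events
             .to("product-views-clean");                                       // output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();                                                       // runs continuously
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```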
  • Add screenshot example
  • Add screenshot example
  • Companies == streams
    What does a retail store do?
    Streams
    Retail
    - Sales
    - Shipments and logistics
    - Pricing
    - Re-ordering
    - Analytics
    - Fraud and theft
  • Table/Stream duality
  • One thing you might be thinking: isn’t this streaming vision really no different from existing technology like Enterprise Messaging Systems or Enterprise Service Buses?
  • So I thought it might be worth giving a quick cliff notes on how Kafka and modern stream processing technologies compare to previous generations of systems. For those really interested in this question we’re putting together a white paper that gives a much more detailed answer. But for those who just want the cliff notes I think there are three key differences.
  • The richness of the stream processing capabilities is a major advance over the previous generations of technology.

    The other two differences really come from Kafka being a modern distributed system:
    --it scales horizontally on commodity machines
    --and it gives strong guarantees for data

    Let’s dive into these two a little bit.
  • So we’ve talked about the APIs and abstractions; in the next few slides I’ll give a preview of Kafka as a data system—the guarantees and capabilities it has. Jun, my co-founder, will be doing a much deeper dive in this area in the next talk in this series, so if you want to learn more about how Kafka works that is the thing to see. But I’ll give a quick walk-through of what Kafka provides. Each of these characteristics is really essential to its usage as a “universal data pipeline” and processing technology.
  • First it scales well and cheaply.
    You can do hundreds of MB/sec of writes per server and can have many servers
    Kafka doesn’t get slower as you store more data in it
    In this respect it performs a lot like a distributed file system
    This is very different from existing messaging systems
    Without this, a lot of the “big data” workloads that Kafka gets used for, which often have very high volume data streams, would not be possible or feasible.
    This scalability is also really important for centralizing a lot of data streams in the same place—if that didn’t scale well it just wouldn’t be practical.
  • Next Kafka provides strong guarantees for data written to the cluster. Writes are replicated across multiple machines for fault tolerance, and we acknowledge the write back to the client.
    All data is persisted to the filesystem.
    And writes to the Kafka cluster are strongly ordered.
    This is another difference from a traditional messaging system—they usually do a poor job of supporting strong ordering of updates with more than a single consumer.
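    As a sketch of where those guarantees start (the topic name, partition count, and replication factor here are arbitrary placeholders), a topic can be created with a replication factor so that each partition is stored on several brokers for fault tolerance:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address

        try (Admin admin = Admin.create(props)) {
            // 6 partitions for parallelism, 3 replicas so data survives machine failures
            // (assumes a cluster with at least 3 brokers).
            NewTopic topic = new NewTopic("product-views", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```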
  • Works as a cluster
    Can replace machines without bringing down the cluster
    Failures are handled transparently
    Data not lost if a machine destroyed
    Can scale elastically as usage grows.