The hardest part of microservices: your data

Microservices architecture is a powerful way to build scalable systems optimized for speed of change. To do this, we need to build independent, autonomous services, which by definition tend to minimize dependencies on other systems. One of the tenets of microservices, and a way to minimize dependencies, is “a service should own its own database”. Unfortunately this is a lot easier said than done. Why? Because: your data.

We’ve been dealing with data in information systems for five decades, so isn’t this a solved problem? Yes and no. A lot of the lessons learned are still very relevant. Traditionally, we application developers have accepted the practice of using relational databases and relying on all of their safety guarantees without question. But as we build services architectures that span more than one database (by design, as with microservices), things get harder. If data about a customer changes in one database, how do we reconcile that with other databases (especially where the data storage may be heterogeneous)?

For developers focused on the traditional enterprise, not only do we have to try to build fast-changing systems that are surrounded by legacy systems, but the domains (finance, insurance, retail, etc.) are incredibly complicated. Just copying what Netflix does for microservices may or may not be useful. So how do we develop and reason about the boundaries in our system to reduce complexity in the domain?

In this talk, we’ll explore these problems and see how Domain Driven Design helps grapple with the domain complexity. We’ll see how DDD concepts like Entities and Aggregates help reason about boundaries based on use cases and how transactions are affected. Once we can identify our transactional boundaries we can more carefully adjust our needs from the CAP theorem to scale out and achieve truly autonomous systems with strictly ordered eventual consistency. We’ll see how technologies like Apache Kafka, Apache Camel and Debezium.io can help build the backbone for these types of systems. We’ll even explore the details of a working example that brings all of this together.

  • Speed!!!.... As in performance? Or scale? What is this speed thing all about?

    This is a very different way of thinking about IT.

    Typically IT is optimized for Cost. Many parts of the business are.

    We’re not product companies anymore….

    IT was traditionally used to transform otherwise paper processes or manual processes. And to support things like CRM, Accounting, Procurement, etc. Internally supporting.

But now companies are using IT to deliver value through services. In fact, startups are finding ways to deliver value through digital channels and are quickly disrupting old-guard enterprise corporations.

    We are service companies.

Services require bi-directional/omni-directional interactions and communication with our customers. Creating value is done with customers.

    The faster you can get things to market the faster you can see what works and what doesn’t. We don’t know what will work up front. We don’t know what will deliver business value up front. We need to discover it.

    What we want is to build an organization that’s able to experiment, fail fast, and iterate on what does work. We basically want IT to drive outcomes that deliver business value.

    And we want to go fast.

  • The discovery of what’s important, and the experimentation process, leads us to want to find business value. We want to quickly find out the things that don’t work and minimize the cost it takes to do these experiments. This transformation is a process, not something that happens overnight, and not something you can copy. You’ll even note that each organization is different in how it can go about this process; each needs to balance speed, safety, and business value for itself.
  • Get back to first principles.

    Focus on principles, patterns, methodologies.

    Tools will help, but you cannot start with tools.
  • Autonomy….
  • What is it? Who defines it? Who owns that definition? Who owns the instances of it. How do I get it? How do I not miss something? And if I do solve these questions what does the architecture look like? Bunch of point to point connections? Lots of big up front design? Lots of contracts and governance? These things tend to break autonomy. Let’s explore this a bit and see what problems we run into with data in a microservices world.
  • Now, understanding the domain, understanding the data model, and understanding where the boundaries are is complex stuff. It cannot be solved with technology alone.

    Let me give you a simple example….

    This seems like a simple, even absurd question. It’s really not. This one simple question can illustrate how ambiguous and contradictory our language is with respect to understanding “real life”.
    We cannot understand how to store a representation of a perceived “real life” unless we can describe it in plain language without ambiguity.

    What is a book and how do we represent it? We need to first understand what “one book is.”

    If an author has written two books we may expect to see two “book” entries represented in some kind of editor database or bibliographic database as “two records”. If they’ve written two editions of the one book, does it appear multiple times? Or do we model that as a revision record? Or maybe each edition gets its own record?

    If a library or bookstore has 5 copies each of two books, do they record that as books? Is a book really just a copy of a book? Or do we call it a copy? But maybe a library inventory system may just refer to copies as books as well for the purposes of counting the total number of physical books. So we could come up with “copies” and “books” but they can be used interchangeably depending on who’s asking.

    Or maybe for some systems a Book is something with a hard cover because they want to exclude periodicals, magazines, ebooks? So a “manual” may be classified as a book in some contexts, but not others.

    Or maybe a book is just a bounded physical unit? But some novels are so long they are actually broken down into two physical elements, maybe labeled Volume I and Volume II. So then are those separate books or one Book? Or the opposite: maybe multiple novel compositions are bound together into a single physical unit, but really they are individual works.

    So we could have a system where the author has written one book, it’s broken into two physical volumes, also known as books, and each volume has 5 copies, for a total of ten books. So what is one book?

    It gets incredibly confusing.

    So now just try to wrap your head around a Customer, or Patient, or Account, etc. The same polysemes exist there, only far more convoluted and ambiguous. And now when we talk about microservices we talk about the big ball of mud and how we cannot change part of it without re-deploying others. That is the easy part. Reconciling all of the different implicit usages of domain models across multiple contexts slammed into a single application is the hard part.
  • A is the book checkout system -- book is a physical copy (second edition, volume I, II, etc all individual books)

    B is the book search system – book can be individual works where a composition may be multiple books and volumes I and II are all the same book

    C is the checkout reporting engine – a book is what A thinks is a book

    D is a recommendation engine – a book isn’t even a book, it’s an “interest” which has a mapping between books


    Book recommendations (D)


    D also wants to consume messages from A.
    But the things we need to do in our service are sufficiently different that we want to change the language.

    A and B have a translation and are coordinating,
    D is not coordinating and will build an anti-corruption layer (ACL) that will do the translation. And it’s nobody else’s business how this works.

    In this case, maybe we have a Book recommendation engine that also reads what gets checked out and whom. Maybe D has some more complicated models it uses for describing recommendations. It wants to use A’s data, but it doesn’t want to conform to A’s domain model. It builds an Anti-Corruption layer to keep its model pure and that can do the translation between its and A’s models.
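The anti-corruption layer described above can be sketched in a few lines. This is a minimal illustration, not the talk's actual implementation: the event shape, the `RecommendationAcl` class, and the title-to-interest mapping are all hypothetical names invented for the example.

```python
from dataclasses import dataclass

# Upstream model (system A's language): a "book" is a physical copy.
@dataclass
class CheckoutEvent:          # hypothetical shape of A's message
    copy_barcode: str
    work_title: str
    member_id: str

# Downstream model (system D's language): recommendations run on "interests".
@dataclass
class InterestSignal:
    member_id: str
    interest: str

class RecommendationAcl:
    """Anti-corruption layer: translates A's language into D's, so D's
    internal model never has to conform to A's notion of a 'book'."""
    def __init__(self, title_to_interest):
        self._mapping = title_to_interest  # owned and maintained by D alone

    def translate(self, event: CheckoutEvent) -> InterestSignal:
        topic = self._mapping.get(event.work_title, "general")
        return InterestSignal(member_id=event.member_id, interest=topic)

acl = RecommendationAcl({"Domain-Driven Design": "software-architecture"})
signal = acl.translate(CheckoutEvent("C-0042", "Domain-Driven Design", "m1"))
print(signal.interest)  # software-architecture
```

The key design point is that the mapping lives entirely inside D's boundary: A can rename or restructure its events and only the ACL changes, never D's domain model.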
  • An order.
    A customer.
    An account.
    A return.
    A claim.
    A discount?
  • This concept of defining language, developing models to describe a domain, implementing those models, enforcing assertions, etc all happen within a certain context, and that context is vitally important in software. In common language, we are smart enough to resolve these types of language conflicts within a sentence because of its context. The computer doesn’t have this context. We have to make it explicit. And any context needs to have explicit boundaries.

    This model needs to be “useful” ie, it should be able to be implemented. Try to establish a model that’s both useful for discussion with the domain experts and is implementable. There are infinite ways to model/think about something. Balance both masters with the model you choose.

    Large complex domains may need multiple models. And really the only way to understand a language and model is within a certain context. That context should have boundaries so it doesn’t bleed or force others to bleed definitions and semantics.


    Bounded context: within this space, this is the context of the language. This is what it means and it’s not ambiguous.
    Central thing about a model is the language you create to express the problem and solution very crisply. Need clear language and need boundaries.

    Anti corruption layers are translations between the different models that may exist in multiple bounded contexts. They keep an internal model consistent and pure without bleeding across the boundaries.

    Bounded contexts tend to be “self contained systems” themselves with a complete vertical stack of the software including UI, business logic, data models, and database. They tend to not share databases across multiple models.
  • We store our data inside this thing…
  • We store our data inside this thing…

    Do we really, as developers, understand how to properly use this thing?
  • Put it all into one big database…

    No, seriously.. Just do this for your applications. You’ll save yourself a lot of trouble.

    Focus on
  • Traditional Databases have tremendous flexibility, safety, etc.
  • Business Agility!!!

    Journey …

    Understand them

    Test them

    Change them

    Different pace – rate of change is key !!!

    Agile business…




    Before going too far, we should have a definition of what microservices are. When we talk about microservices, we talk about breaking up complicated, potentially really large systems, whatever they may be, into smaller components. We break them into smaller components so we can understand them individually, test them individually, scale them, and ultimately change them at a different pace than the rest of the system. You can imagine having to please every master in a monolithic environment can bog things down and inhibit change. Which, as we discussed in the beginning, is the key here. We need to be able to work on systems that can change with the rest of the business as it gets even more competitive and disruptive.

    One of the keys to this flexibility and ability to change is to focus on autonomy. Systems should be designed to be more autonomous so that changes don’t affect other downstream systems, faults don’t ripple across into cascading failures, etc. The more dependencies we have (on other systems, protocols, shared libraries, databases, etc.) the harder it can be to make changes. So we talk about services having and owning their own data, choosing the right technology for their function, and consciously enforcing modularity through APIs and contracts.

    Autonomy is key here


    But autonomy of systems includes autonomy of teams as well. Microservices can be a means to an end for a company serious about investing in the digital experience it provides to customers. It’s not in and of itself the end goal. It’s part of a digital transformation that encompasses all parts of the organization.
  • One large database!

    We should focus on how we design our data models so that they can be sharded and distributed…. Focus on transactions, etc not 2PC
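One concrete way to read "design data models so they can be sharded" is: pick a shard key so that every record a transaction needs lives on one node, and no cross-shard coordination (2PC) is required. A minimal sketch, under the assumption that the customer id is the aggregate's natural shard key; the function name and shard count are illustrative.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; real systems choose and rebalance differently

def shard_for(customer_id: str) -> int:
    """Route every record belonging to one customer to the same shard,
    so any transaction touching a single customer stays on a single
    node and needs no distributed commit."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# All of a customer's orders, addresses, etc. land on that customer's shard:
print(shard_for("cust-123") == shard_for("cust-123"))  # True
```

The hashing scheme matters less than the modeling decision: if a use case routinely needs two aggregates on different shards in one transaction, the boundaries (or the shard key) are probably wrong.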
  • In this scenario we may have established our boundaries… our customer profile service has taken an update to customer preferences. A customer and its profile/preferences may be modeled in other services like our recommendation engine, our master customer SOR, our social alerting engine, etc. And we need to update some important information... So the systems that are interested in this data must first be defined and implemented in code ahead of time. Adding a new system requires changes. Additionally, these downstream systems are not transactional... So if there are errors somewhere, then it’s up to the application to try and decide what action to take... And while deciding that action, the application could fail... And no state is stored about where it left off... And now we’re in an inconsistent state.
  • You could try adding compensation logic and stateful tracking of this locally, and it’s also great practice to implement idempotent consumers. The problem is there could be “read uncommitted” issues like dirty reads or dirty writes that happen downstream because of this, and a compensation now gets much more complicated.
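The idempotent-consumer idea mentioned above can be sketched simply: remember which message ids have already been applied and drop redeliveries. This is an in-memory toy; in practice the seen-set would live in the consumer's own database, updated in the same local transaction as the side effect.

```python
class IdempotentConsumer:
    """Applies each message at most once by tracking processed ids.
    (Toy version: the seen-set is in memory; a real one persists it
    transactionally alongside the state change.)"""
    def __init__(self, handler):
        self._seen = set()
        self._handler = handler

    def on_message(self, msg_id, payload):
        if msg_id in self._seen:
            return False              # duplicate delivery: ignore
        self._handler(payload)        # apply the side effect once
        self._seen.add(msg_id)
        return True

applied = []
consumer = IdempotentConsumer(applied.append)
consumer.on_message("m-1", "add address")
consumer.on_message("m-1", "add address")   # broker redelivers
print(applied)  # ['add address']  - applied exactly once
```

Idempotence turns "at-least-once" delivery into effectively-once processing, which is why it pairs naturally with the event-driven approaches discussed next.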
  • We could just try emitting events and say “whenever this happens over here, we just update a message queue”… now we have to try to get consensus between the two systems. This can be expensive, as consensus tends to be, and you also suffer from availability issues. 2PC is an anti-availability protocol.
  • 2PC is perfectly fine when consensus is required, though you have to consider the drawbacks. 2PC requires operational overhead to manage the TX log of the transaction manager. Also, you can run into issues with deadlocks when holding locks too long. You can also end up in heuristic situations where one side unilaterally rolls back. Now you need human intervention and reconciliation logic. People poo-poo 2PC, but it may be appropriate in some situations.
  • Another situation that will tend to come up is identifying boundaries around IO and read/write patterns. How do we get the writes over to the read database? Do we do 2PC from the application? Do we use a message queue?
  • What about the so-called N+1 problem? Where we interact with downstream services, or maybe we take on events and need to enrich them with additional metadata. For example, we may want to group and sort a set of customers that fall into a certain criteria for specific recommendations, and we need to enrich the customer with additional preferences. So we query for the customer list and then we loop through and enrich each customer. Can downstream systems sustain this kind of rapid invocation? If they can, are you exposed at all to underlying storage inconsistencies and concurrency issues? Do they just try to create bulk APIs? And those APIs are inconsistent across providers (pagination, missed processing of singular elements, etc.)
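The N+1 shape is easy to see with call counters: the naive loop makes one call per customer, while a bulk endpoint makes one call for the whole batch. A minimal sketch; the fetch functions are stand-ins for downstream service calls, not a real API.

```python
# Naive N+1: one query for the list, then one call per customer.
def enrich_n_plus_one(customer_ids, fetch_prefs_one):
    return {cid: fetch_prefs_one(cid) for cid in customer_ids}

# Bulk alternative: one round trip for the whole batch.
def enrich_bulk(customer_ids, fetch_prefs_bulk):
    return fetch_prefs_bulk(customer_ids)

calls = {"one": 0, "bulk": 0}

def fetch_one(cid):            # stand-in for a per-customer service call
    calls["one"] += 1
    return {"cid": cid, "prefs": []}

def fetch_bulk(cids):          # stand-in for a batch endpoint
    calls["bulk"] += 1
    return {cid: {"cid": cid, "prefs": []} for cid in cids}

ids = [f"c{i}" for i in range(100)]
enrich_n_plus_one(ids, fetch_one)
enrich_bulk(ids, fetch_bulk)
print(calls)  # {'one': 100, 'bulk': 1}
```

100 customers means 101 round trips in the naive version versus 2 with a bulk API, which is exactly the load question raised above: can the downstream system sustain the per-item version at all?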
  • So maybe we set up a cache in front of the service to alleviate the penalties of calling downstream services rapidly… and now what sort of stale data can you deal with? Bounded staleness? How do you handle cache eviction?
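The "bounded staleness" trade-off in the caching note above can be made concrete with a TTL cache: a hit may be stale, but never older than the bound. A sketch with an injectable clock so the bound is visible; the class name and shape are illustrative.

```python
import time

class BoundedStaleCache:
    """Cache in front of a downstream service: serves hits no older
    than max_age seconds (bounded staleness), then refetches."""
    def __init__(self, fetch, max_age=30.0, clock=time.monotonic):
        self._fetch = fetch
        self._max_age = max_age
        self._clock = clock
        self._entries = {}            # key -> (value, stored_at)

    def get(self, key):
        hit = self._entries.get(key)
        if hit is not None and self._clock() - hit[1] < self._max_age:
            return hit[0]             # possibly stale, but boundedly so
        value = self._fetch(key)      # miss or expired: go downstream
        self._entries[key] = (value, self._clock())
        return value

# A fake clock makes the staleness bound easy to demonstrate:
now = [0.0]
fetches = []
cache = BoundedStaleCache(lambda k: fetches.append(k) or len(fetches),
                          max_age=30.0, clock=lambda: now[0])
cache.get("c1")          # downstream call
cache.get("c1")          # served from cache (0s old)
now[0] = 31.0
cache.get("c1")          # older than the bound: downstream again
print(len(fetches))      # 2
```

This answers the eviction question with the simplest policy (time-based expiry); real systems layer on size limits and explicit invalidation, but the staleness bound is the contract readers can reason about.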
  • We expect some levels of consistency and we need to be able to withstand faults because we know faults occur… But Brewer says we can only pick two out of consistency, availability, and partition tolerance...
  • Consistency models… the set of allowable histories of operations

    We say that we read what we wrote
  • Now, a process is allowed to read the most recently written value from any process, not just itself.
    The register becomes a place of coordination between two processes; they share state.

    We relax our model and say when we read, we read the value at the time of the read and take into account other processes writes…
  • Somewhere (or some node… a database, a service, a set of databases in a cluster) where there is an appearance of an order that is immediately visible to everyone viewing it.

    Moreover, linearizability’s time bounds guarantee that those changes will be visible to other participants after the operation completes. Hence, linearizability prohibits stale reads
  • Different users will see my message at different times–but each user will see my operations in order. Once seen, a post shouldn’t disappear.
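That guarantee ("different users see my posts at different times, but always in order, and never un-see one") is essentially monotonic reads over an ordered log. A toy model, with invented names, where each reader polls its own offset into a shared append-only log:

```python
class Feed:
    """Append-only log: readers may lag behind the writer, but each
    reader observes the posts in the writer's order, and once a post
    is seen it never disappears (monotonic reads over an ordered log)."""
    def __init__(self):
        self._log = []

    def post(self, msg):
        self._log.append(msg)

    def reader(self):
        offset = 0                      # per-reader position in the log
        def poll():
            nonlocal offset
            new = self._log[offset:]    # everything not yet seen, in order
            offset = len(self._log)
            return new
        return poll

feed = Feed()
alice, bob = feed.reader(), feed.reader()
feed.post("p1")
print(alice())      # ['p1']  - Alice sees it right away
feed.post("p2")
print(alice())      # ['p2']
print(bob())        # ['p1', 'p2'] - Bob sees them later, same order
```

This is also the mental model behind consuming a Kafka partition: per-consumer offsets over an ordered, immutable log give exactly this "in order, eventually, never retracted" behavior.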
  • Score keeper – needs to read most up to date version of the score… cannot do an eventually consistent read (bounded staleness, consistent prefix, monotonic read).. BUT could do a read my writes read
    Umpire needs to do a strict consistent read to determine at the 9th inning or any afterward whether he can end the game
  • Show a diagram of consistency related to performance..
  • The hardest part of microservices: your data

    1. 1. Full slide deck here: http://bit.ly/ceposta-hardest-part
    2. 2. Twitter: @christianposta Blog: http://blog.christianposta.com Email: christian@redhat.com Christian Posta Principal Architect – Red Hat • Author “Microservices for Java Developers” • Committer/contributor Apache Camel, Apache ActiveMQ, Fabric8.io, Apache Kafka, Debezium.io, et. al. • Worked with large Microservices, web-scale, unicorn company
    3. 3. Free download @ http://developers.redhat.com
    4. 4. People try to copy Netflix, but they can only copy what they see. They copy the results, not the process. Adrian Cockcroft, former Chief Cloud Architect, Netflix
    5. 5. “Microservices” is about optimizing… for speed.
    6. 6. • Maybe it doesn’t matter so much… What we really care about is speed, reduced time to value, and business outcomes. • Maybe a data-driven approach is a better way to answer this question... Are you doing microservices?
    7. 7. • Number of features accepted • % of features completed • User satisfaction • Feature Cycle time • defects discovered after deployment • customer lifetime value (future profit as a result of relationship with the customer) https://en.wikipedia.org/wiki/Customer_lifetime_value • revenue per feature • mean time to recovery • % improvement in SLA • number of changes • number of user complaints, recommendations, suggestions • % favorable rating in surveys • % of users using which features • % reduction in error rates • avg number of tx / user • MANY MORE! Are you doing microservices?
    8. 8. How does your company go fast?
    9. 9. Manage dependencies.
    10. 10. Data is a major dependency.
    11. 11. Wait. What is data?
    12. 12. What is one “thing”?
    13. 13. Book checkout / purchase Title Search Recommendations Weekly reporting
    14. 14. Focus on domain models, not data models • Break things into smaller, understandable models • Surround a model and its “context” with a boundary • Implement the model in code or get a new model • Explicitly map between different contexts • Model transactional boundaries as aggregates
    15. 15. Aggregates • Use the domain to lead you to invariant rules across your domain model • Model the invariants and their associated entities/value objects as “aggregates” • Aggregates focus on transactional boundaries (ie, transactional in the “A” from ACID sense) • Individual aggregates are transactionally consistent • Aggregates use relaxed consistency models between aggregates (ie, something like the Actor model?) • Bounded Contexts use relaxed consistency models between boundaries
    16. 16. Stick with these conveniences as long as you can. Seriously.
    17. 17. But ... • Load/size is too great to fit on one box • Modules/use cases have different read/write characteristics • Queries/joins are getting too complex • Security issues • Lots of conflicting changes to the model/schema • Need denormalized, optimized indexing engines • We can live with eventual consistency (whatever that really means)
    18. 18. From here on out, what we’re saying is “thank you old reliable, awesome database… we’ve got it from here”…
    19. Kinda looks like a combinatorial mess…
    20. “A microservice has its own database”
    21. How do we deal with data in this world?
    22. We need to understand something about the data inside our services and the data outside our services. https://msdn.microsoft.com/en-us/library/ms954587.aspx
    23. Data inside a service
    24. Data inside a service
    25. Data outside a service
    26. Data outside a service
    27. Data outside a service
    28. We’re now building a full-fledged distributed system. Some things to remember…
    29. Plan for failures. Build concepts of time, delay, network, and failures into the design as first-class citizens.
    30. How do you “read” data, and how do you “update” data?
    31. tx.begin()
        c = retrieveCustomer()
        c.addNewAddress(address)
        tx.add(c)
        tx.commit()
        publishAddressChange(address, c.id)
    32. tx.begin()
        c = retrieveCustomer()
        c.addNewAddress(address)
        tx.add(c)
        publishAddressChange(address, c.id)
        tx.commit()
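Both orderings on slides 31 and 32 have a flaw: publishing after the commit can lose the event if the process crashes in between, while publishing before the commit can announce a change that never actually commits. One common alternative is the transactional outbox, which the change-data-capture tooling later in the talk (Debezium) can drive. A hedged sketch in the slides' pseudocode style; the `outboxEvent` helper is illustrative, not from the talk:

```
tx.begin()
c = retrieveCustomer()
c.addNewAddress(address)
tx.add(c)
tx.add(outboxEvent("AddressChanged", c.id, address))  // same transaction as the write
tx.commit()
// No direct publish here: a separate relay (e.g. CDC tailing the database's
// commit log) reads the committed outbox row and publishes it to the broker,
// so an event is emitted if and only if the transaction committed.
```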
    33. Separate reads and writes (CQRS)
    34. https://secure.phabricator.com/book/phabcontrib/article/n_plus_one/
    35. https://secure.phabricator.com/book/phabcontrib/article/n_plus_one/
        getBulkHats()
        getBulkHatsForCatsExcept()
        wellReallyIJustWantCertainHats()
        justExecuteThisSqlForMe()
    36. https://secure.phabricator.com/book/phabcontrib/article/n_plus_one/
    37. For our reads and writes, we need some “consistency”.
    38. What is consistency? The history of past operations that we, as readers of the data, can observe.
    39. We need reads and writes. But we expect failures. This is starting to sound like a distributed-systems theorem I’ve heard…
    40. CAP tells us to pick 2: Consistency, Availability, Partition tolerance. CAP is a bad way to think about this.
    41. Linearizable (strict) consistency (the “C” in CAP)
    42. Sequential consistency
    43. Monotonic reads consistency
    44. Eventual consistency
    45. Consistency models… https://en.wikipedia.org/wiki/Consistency_model
        • Strict consistency (linearizability)
        • Sequential consistency
        • Causal consistency
        • Processor consistency
        • PRAM (FIFO) consistency
        • Bounded staleness consistency
        • Monotonic read consistency
        • Monotonic write consistency
        • Read-your-writes consistency
        • Eventual consistency
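As a toy illustration (not from the slides) of why these models differ in practice, here is a minimal in-memory primary/replica pair in Java: a read served by the lagging replica can miss your own just-committed write until replication runs, which is the read-your-writes vs. eventual-consistency distinction in miniature. All names are invented for the sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Toy simulation: the primary applies writes immediately; the replica
// only sees them when replicate() runs (standing in for replication lag).
public class LaggingReplica {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> replica = new HashMap<>();

    public void write(String key, String value) {
        primary.put(key, value);
    }

    // A read routed to the replica may return stale (or missing) data…
    public String readFromReplica(String key) {
        return replica.get(key);
    }

    // …until replication catches up, at which point readers converge.
    public void replicate() {
        replica.putAll(primary);
    }
}
```

A system offering read-your-writes would have to route this reader to the primary (or a caught-up replica); an eventually consistent one is allowed to return the stale answer in the meantime.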
    46. Can we really use relaxed consistency models?
    47. Tradeoffs to make with read consistency and performance
    48. Replicated Data Consistency Explained through Baseball (Doug Terry) https://www.microsoft.com/en-us/research/publication/replicated-data-consistency-explained-through-baseball/
        • What consistency model do you need, depending on what role you’re playing?
        • What consistency model are you willing to pay for?
        • Official scorekeeper? (Linearizability or RMW)
        • Umpire? (Linearizability)
        • Sportswriter? (Bounded staleness, eventual consistency)
        • Radio updates? (Monotonic reads, bounded staleness)
        • Statistician? (Bounded staleness)
        • Friends in the pub? (Eventual consistency)
    49. Replicated Data Consistency Explained through Baseball (Doug Terry) https://www.microsoft.com/en-us/research/publication/replicated-data-consistency-explained-through-baseball/
    50. Maybe we can use a relaxed consistency model for some of those previously mentioned use cases…
    51. Example relaxing consistency…
    52. Internet companies created their own tools to help with this (some open source!):
        • Yelp – MySQL Streamer: https://github.com/Yelp/mysql_streamer
        • LinkedIn – Databus: https://github.com/linkedin/databus
        • Zendesk – Maxwell: https://github.com/zendesk/maxwell
    53. Meet debezium.io
    54. Meet debezium.io
    55. WePay uses Debezium: https://wecode.wepay.com/posts/streaming-databases-in-realtime-with-mysql-debezium-kafka
    56. Meet debezium.io
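As a small, hedged illustration of what a consumer of Debezium's output deals with: each change event's payload carries an `op` field marking the kind of change (`c` create, `u` update, `d` delete, `r` snapshot read). The regex-based extractor below is illustrative only; a real consumer would deserialize the full JSON envelope (`before`, `after`, `source`, `op`) with a proper JSON library rather than pattern-matching.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: pull the "op" field out of a Debezium-style
// change-event JSON string. Regex extraction is a shortcut for the sketch;
// production code should parse the JSON envelope properly.
public class ChangeEventOp {
    private static final Pattern OP =
        Pattern.compile("\"op\"\\s*:\\s*\"([cudr])\"");

    public static String opOf(String eventJson) {
        Matcher m = OP.matcher(eventJson);
        return m.find() ? m.group(1) : "unknown";
    }
}
```

A consumer dispatching on this field might, for example, upsert its local read model on `c`/`u`/`r` and delete on `d`, keeping a denormalized view in sync with the source database.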
    57. Twitter: @christianposta Blog: http://blog.christianposta.com Email: christian@redhat.com Thanks for listening! Time for demo?
