
Microservices on top of Kafka

How we got lost during the transition to Microservices, and how we found our way out using Kafka


  1. Microservices on top of Kafka
     How we got lost during the transition to microservices, and how we found our way out using Kafka
     Vladi Feigin, Software Architect, LivePerson (vladif@liveperson.com)
  2. Preface
     "This is an attempt to share our experience in breaking a monolith and moving to a microservices world" ... using Kafka
  3. The Monolith
     ● Active customers
     ● Mission-critical application
     ● Complex business logic
     ● Very high uptime SLA
  4. The Decision
  5. Project “Thor”
  6. How can it be broken?
     ● Develop a new system from scratch, OR
     ● Gradually break the existing monolith
     We believe the second option is the way to go!
  7. The Gradual Approach
     Monolith → first app out (shadow mode) → first app out (production) → monolith without the app
     Gradual progress:
     ● Send events to Kafka. Tests!
     ● Take the app out in shadow mode. Tests!
     ● Take the app out in production mode
  8. TODO List: Microservices on Kafka
  9. ● Define every service's responsibility
     ● Design service behavior first (the actions it performs)
     ● Define the data model (see next slide)
     ● Apply Domain-Driven Design principles
     ● Revise the data model
  10. Define Commands, Facts and Entities
      ● A Command (Request) is an action addressed to, and consumed by, a single service
      ● A Fact is a declaration to the rest of the world about a service state change (triggered by a command)
      ● Entities are the business objects
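The three message kinds can be sketched as plain types. This is a minimal illustration for a hypothetical ordering domain; the names (`CreateOrder`, `OrderCreated`, `Order`) are ours, not from the talk.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateOrder:
    """Command: addressed to, and consumed by, a single service."""
    order_id: str
    customer_id: str

@dataclass(frozen=True)
class OrderCreated:
    """Fact: declares a state change to the rest of the world."""
    order_id: str
    customer_id: str

@dataclass
class Order:
    """Entity: the business object the service manages."""
    order_id: str
    customer_id: str
    status: str = "NEW"

def handle(cmd: CreateOrder) -> OrderCreated:
    # The owning service consumes the command, updates its entity,
    # and emits a fact describing the change.
    return OrderCreated(cmd.order_id, cmd.customer_id)
```

Note the asymmetry: a command is consumed by exactly one service, while the resulting fact may be read by any interested service.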
  11. Use a schema
      ● A schema defines the language the services use to talk to each other
      ● It can be JSON, Avro or Protobuf
      ● Define and enforce rules for schema compatibility
      ● Every schema change must be validated
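One compatibility rule can be sketched with plain dicts standing in for a schema registry. This is illustrative only; real Avro/Protobuf compatibility rules are richer, and the field names are made up.

```python
def backward_compatible(old: dict, new: dict) -> bool:
    """Consumers on the new schema can still read old events only if
    every field the new schema requires already existed in the old one."""
    return set(new["required"]) <= set(old["fields"])

# Toy schemas: adding an optional field is safe, adding a required one is not.
old_schema = {"fields": {"order_id", "amount"}, "required": {"order_id"}}
safe_change = {"fields": {"order_id", "amount", "currency"},
               "required": {"order_id"}}
breaking_change = {"fields": {"order_id", "amount", "currency"},
                   "required": {"order_id", "currency"}}
```

A gate like this would run on every schema change before it is allowed into production, which is what "every schema change must be validated" amounts to.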
  12. Define business flows
  13. Use Event Sourcing and CQRS
      ● Separate the write operations from the read operations
      ● Facts are written chronologically into dedicated event-sourced Kafka topics
      ● Create read-optimized views in an external DB for queries
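The write/read split can be sketched in memory: a list stands in for the event-sourced Kafka topic, and a projector folds facts into a read-optimized view standing in for the external DB. Event shapes are illustrative.

```python
log: list = []  # the "topic": append-only, chronological facts

def write_fact(fact: dict) -> None:
    """Write side: only ever appends."""
    log.append(fact)

def project(facts) -> dict:
    """Read side: fold the facts into a query-friendly view,
    here the current status per order."""
    view = {}
    for f in facts:
        if f["type"] == "OrderCreated":
            view[f["order_id"]] = "NEW"
        elif f["type"] == "OrderShipped":
            view[f["order_id"]] = "SHIPPED"
    return view
```

Because the view is derived, it can be dropped and rebuilt from the topic at any time, which is also what makes the later "design for reprocessing" point possible.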
  14. Follow the Single Writer Principle
      ● Every service is allowed to write only to the topics it owns
      ● Services are allowed to read from other services' topics
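A sketch of enforcing this with a topic-ownership map checked before every produce. The service and topic names are made up, and in production this would typically be done with Kafka ACLs rather than application code.

```python
# Each topic has exactly one owning (writing) service.
OWNERS = {
    "orders.facts":  "order-service",
    "billing.facts": "billing-service",
}

def may_write(service: str, topic: str) -> bool:
    """Only the topic's owner may produce to it."""
    return OWNERS.get(topic) == service

def may_read(service: str, topic: str) -> bool:
    """Any service may consume from any owned topic."""
    return topic in OWNERS
```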
  15. Carefully design Kafka topics
      ● Service topics should reflect the service's data model
      ● Topics are integral to all business flows
      ● Every service usually has its own topic
      ● Topic design is similar to designing a DB table
  16. Validate, Validate, Validate!
      ● Validate events before you write them to Kafka
      ● Validate against the schema
      ● Validate against local state
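Both checks can be combined in one pre-write gate. The event shape and state layout below are illustrative, not from the talk.

```python
SCHEMA = {"type", "order_id"}  # fields every event must carry

def valid(event: dict, local_state: dict) -> bool:
    # 1) Against the schema: all declared fields must be present.
    if not SCHEMA <= event.keys():
        return False
    # 2) Against local state: e.g. never ship an order we never created.
    if event["type"] == "OrderShipped" and event["order_id"] not in local_state:
        return False
    return True
```

Only events that pass both checks would be produced to Kafka; everything else is rejected (or routed to a dead-letter topic) before it can corrupt downstream views.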
  17. Kafka is the Single Source of Truth
      ● Data crucial for managing the core business flows should be managed in Kafka
      ● Avoid other "moving parts", such as a database, for managing the core business flow
      ● Keep your critical data easy to move
  18. Design for reprocessing
      ● Your application should be able to reprocess historical data from Kafka
      ● In practice, this means replaying facts from the event-sourced topics
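Reprocessing reduces to replaying the topic from some earlier offset and folding the facts into fresh state. With a real broker this is the consumer seeking back on the event-sourced topic; the in-memory fold below is an illustrative stand-in.

```python
def reprocess(topic: list, from_offset: int = 0) -> dict:
    """Rebuild state by replaying facts starting at from_offset."""
    state = {}
    for fact in topic[from_offset:]:
        # Facts arrive in chronological order, so the last one per key wins.
        state[fact["order_id"]] = fact["type"]
    return state
```

The key design constraint is that this fold must be deterministic and side-effect free, so running it twice over the same facts yields the same state.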
  19. Monitor
      ● You must have full visibility into what's going on in the system, from the very beginning of the process
      ● Constantly look for unexpected behaviour and anomalies
  20. Be a smart pessimist
      ● Be prepared for non-happy-path and edge-case scenarios!
      ● List all possible difficult scenarios and test your architecture against them
      ● Ensure you have a solution for every scenario
  21. Postscript
      Microservices cons:
      ● Microservices are hard and challenging
      ● Operational costs are high
      ● Hard to debug in production
      Microservices pros:
      ● Unleash development speed
      ● Services scale out more easily
      ● Clear roles and responsibilities for services
  22. We're hiring
      LivePerson is hiring in Tel Aviv and Raanana: https://www.liveperson.com/company/careers
  23. Appendix
  24. Consider Kafka compacted topics
      ● Long topic retention
      ● Deletion by fact key
      ● Smaller local state
      ● Custom retention mechanisms
      ● Often the only practical option for supporting GDPR deletion
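What compaction converges to can be sketched in a few lines: the latest value per key survives, and a record with a null value (a tombstone) eventually removes the key entirely, which is what enables per-key GDPR deletion. This is a simplified model; real compaction runs asynchronously and retains tombstones for a configurable period first.

```python
def compacted(records):
    """Final state of a compacted topic: latest value per key,
    with None acting as a tombstone that deletes the key."""
    latest = {}
    for key, value in records:
        if value is None:
            latest.pop(key, None)  # tombstone: delete by fact key
        else:
            latest[key] = value
    return latest
```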
  25. Configuration
      Producer parameters:
      ● acks=all
      ● Many retries on failure
      ● If event order is critical, set max.in.flight.requests.per.connection=1
      ● If your data throughput allows it, use synchronous send
      Server configuration:
      ● unclean.leader.election.enable=false
      ● min.insync.replicas=2
      ● Replication factor of at least 3
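The producer side of these settings, written as a confluent-kafka style config dict (a sketch: the broker address is a placeholder, the retry count is our choice, and the server-side settings belong in the broker's configuration, not here).

```python
producer_config = {
    "bootstrap.servers": "localhost:9092",       # placeholder broker address
    "acks": "all",                               # wait for all in-sync replicas
    "retries": 10,                               # many retries on failure
    "max.in.flight.requests.per.connection": 1,  # preserve event order
}
# "Synchronous send" then means calling produce() followed by flush(),
# so the application does not proceed until delivery is confirmed.
```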
