
Lessons learnt building a Distributed Linked List on S3

A talk presented at AWS Community Day, Bangalore 2019.


  1. Lessons Learnt building a Distributed Linked List on S3 @theManikJindal | Adobe
  2. I am HE. @theManikJindal, Computer Scientist, Adobe. P.S. You get a chocolate if you ask a question.
  3. What is a Distributed Linked List?
     • Linked List: an ordered set of nodes, each node containing a link to the next one.
     • Distributed Linked List (sketched below):
       • Data (nodes) is stored on multiple machines for durability and to allow reading at scale.
       • Multiple compute instances write concurrently for resiliency and to write at scale.
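A minimal sketch of what one node of such a list might look like; the field names are illustrative, not the actual Adobe I/O Events schema. Each node carries a batch of event payloads plus the key of the node that follows it.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ListNode:
    """One node of the distributed linked list, stored as a single S3 object."""
    key: str                  # object key of this node (e.g. a random GUID)
    events: List[dict]        # the batch of event payloads held by this node
    next_key: Optional[str]   # object key of the next node, or None at the tail
```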
  4. What’s the problem we’re trying to solve?
     • Adobe I/O Events enables our customers to create event subscriptions for events generated by products and services across Adobe.
     • Our customers can consume events via Webhook PUSH or PULL them from the Journaling API.
     • A Journal is an ordered list of events, much like a ledger or a log: new entries (events) are added to the end and the ledger keeps growing.
  5. What’s the problem we’re trying to solve?
     • Each event subscription has a corresponding Journal, and every event that we receive needs to be written to the correct Journal.
     • The order of events, once written into the Journal, cannot be changed. Newer events can only be added to the end. A client application reads the events in this order.
     • We have millions of events per hour (and counting) and over a few thousand subscriptions, and we also need to operate in near real time.
  6. Approach-1: We stored everything in a Database
     • We stored all the events in a document database (Azure Cosmos DB).
     • We ordered the events by an auto-increment id.
     • We partitioned the events’ data in the database by the subscription id (a sketch of this layout follows below).
     I slept at night because we did not order the events by timestamps.
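For concreteness, a rough sketch of that first layout. The document shape and the query are illustrative, not the production schema; Cosmos DB's SQL API accepts parameterized queries of roughly this form.

```python
import time
import uuid

def make_event_document(subscription_id: str, seq: int, payload: dict) -> dict:
    """Approach-1: one document per event, partitioned by subscription id and
    ordered by an auto-increment sequence number (illustrative schema)."""
    return {
        "id": str(uuid.uuid4()),
        "subscriptionId": subscription_id,   # partition key, controlled by the client (which hurt us, see slide 8)
        "seq": seq,                          # ordering key
        "receivedAt": int(time.time() * 1000),
        "payload": payload,                  # a few KB to hundreds of KB per event
    }

# Reading a page of the journal meant an ordered, partition-scoped query per request:
READ_QUERY = """
SELECT * FROM events e
WHERE e.subscriptionId = @subscriptionId AND e.seq > @cursor
ORDER BY e.seq
OFFSET 0 LIMIT 100
"""
```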
  7. Approach-1: FAILED
     • Reading the events was very expensive. It required database lookups, ordering and data transfer of a few hundred KBs per request.
     • Our clients could not read events at scale: we were not able to handle more than a dozen clients reading concurrently.
     • Incident: a single client brought us down in a day by publishing large events frequently enough. (We hit Cosmos DB’s 10 GB partition limit.)
  8. Approach-1: Lessons Learnt
     • Partition Policy: do not partition data by something that a client has control over (for example: user_id, client_id, etc.).
     • Databases were not the right fit for storing messages ranging from a few KBs to hundreds of KBs.
     • RTFM: Cosmos DB's pricing is based on reserved capacity in terms of a strange unit, "RU/s". There wasn't a way to tell how much we'd need.
  9. Approach-2: We used a mix of Database & Blob Storage
     • We moved just the event payloads out to blob storage (AWS S3), to quickly rectify the partition limit problem.
     • We also batched the event writes to reduce our storage costs.
  10. Approach-2: FAILED
     • Reading events was still very expensive. It required database lookups, ordering and then up to a hundred S3 GETs per request.
     • Due to batching, writing events was actually cheaper than reading them.
     • Our clients still could not read events at scale. Even after storing the event payloads separately, the database continued to be the chokepoint.
  11. Approach-2: Lessons Learnt
     • Databases were not the right fit: we never updated any records and only deleted them based on a TTL policy.
     • Because we used a managed database service, we were able to scale event reads for the time being by throwing money at the problem.
     • Batching needed to be optimized for reading and not writing. The system writes once; clients read at least once.
  12. Searching for a Solution: What do we know?
     • A database should not be involved in the critical path of reading events, because a single partition will not scale and there is no partitioning policy that can distribute the load effectively.
     • Blob storage (AWS S3) is ideal for storing event payloads, but it is costly. So we will have to batch the events to save on costs, and we should batch them in a way that optimizes reading costs.
  13. Searching for a Solution: Distributed Linked List
     • Let’s build a linked list of S3 objects. Each S3 object not only contains a batch of events but also the information needed to find the next S3 object on the list (a reader sketch follows below).
     • Because each S3 object knows where the next S3 object is, we eliminate the database from the critical path of reading events, and reading events can then potentially scale as much as S3 itself.
     • Because we batch to optimize reading costs, the cost of reading events could be as low as the cost of S3 GET requests ($4 per 10 million requests).
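A minimal reader sketch, assuming each object's body is JSON of the shape in the node sketch above ({"events": [...], "next_key": ...}); bucket and key names are illustrative. The reader follows next pointers with plain S3 GETs and never touches a database.

```python
import json
import boto3

s3 = boto3.client("s3")

def read_journal(bucket: str, start_key: str, max_objects: int = 10) -> list:
    """Follow the linked list with plain S3 GETs, starting at start_key."""
    events, key = [], start_key
    for _ in range(max_objects):          # cap the number of GETs per call (see slide 22)
        if key is None:
            break                         # reached the current tail of the list
        obj = s3.get_object(Bucket=bucket, Key=key)
        node = json.loads(obj["Body"].read())
        events.extend(node["events"])
        key = node.get("next_key")        # pointer to the next batch, written in advance
    return events
```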
  14. Distributed Linked List – The Hard Problem
     • From a system architecture standpoint, we needed multiple compute instances to process the incoming stream of events, both for resiliency and to write events at scale.
     • However, we had a hard problem on our hands: we needed to order the writes coming from multiple compute instances, otherwise we’d risk corrupting the data.
     Well, I thought, how hard could it be? We could always acquire a lock...
  15. Distributed Linked List – Distributed Locking
     • To order the writes from multiple compute instances, we used a distributed lock on Redis. A compute instance would acquire the lock to insert into the list (the pattern we tried is sketched below).
     • It turned out that distributed locks could not always guarantee mutual exclusion. If the lock failed, the results would have been catastrophic, and hence we could not take a chance, however small.
     • To rectify this we explored ZooKeeper and sought help to find a lock that could work for us – in vain.
     A good read on distributed locks by Martin Kleppmann: https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
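For context, a minimal sketch of the single-Redis locking idiom discussed in Kleppmann's post (SET NX PX plus a compare-and-delete release); key names and timeouts are illustrative. As the slide notes, this cannot guarantee mutual exclusion under process pauses or failover, which is exactly why it was abandoned.

```python
import uuid
import redis

r = redis.Redis()

# Lua script: release the lock only if we still own it (compare the token before deleting).
RELEASE = r.register_script(
    "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end"
)

def acquire_journal_lock(name: str, ttl_ms: int = 5000):
    """Try to acquire the lock; return the owner token or None. Illustrative only:
    a GC pause longer than ttl_ms lets two writers hold the 'lock' at once."""
    token = str(uuid.uuid4())
    if r.set(name, token, nx=True, px=ttl_ms):   # SET key token NX PX ttl
        return token
    return None

def release_journal_lock(name: str, token: str) -> None:
    RELEASE(keys=[name], args=[token])
```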
  16. Distributed Linked List – ZooKeeper
     “No major cloud provider provides a managed ZooKeeper service. If you want us to run ZooKeeper ourselves, we will need another full-time DevOps engineer.”
     – Sathyajith Bhat (Senior DevOps Engineer)
  17. Distributed Linked List – Say No to Distributed Locks
     “The difference between ‘a distributed lock’ and ‘a distributed lock that works in practice and is on the critical path for high throughput writes and magically is not the bottleneck’ is ~50 person years.” [sic]
     – Michael Marth (Director, Engineering)
  18. Distributed Linked List – A Lock-Free Solution
     • In the first step, we batch and store the event payloads in blob storage (S3), and then order the different batches using an auto-increment id in a database (MySQL). This step can be performed independently by each compute instance.
     • In the second step, we transfer the database order to blob storage. The algorithm to do so is idempotent and hence works lock-free with multiple compute instances. We even proved the algorithm correct mathematically. (A rough sketch of the two steps follows below.)
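The talk does not spell the algorithm out, so the following is only one plausible shape of it, under assumed names (a staging/ key prefix, a batches table, the node layout used in the earlier sketches); the production version differs. The point it illustrates: step 1 needs no coordination, and step 2 always writes the same bytes to the same key for a given pair of database rows, so any number of instances can run it concurrently without a lock.

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")

def step1_store_batch(db, bucket: str, events: list) -> str:
    """Step 1 (per instance, no coordination): stage the payload batch in S3 under a
    random key, then claim a position for it via a MySQL auto-increment id."""
    key = uuid.uuid4().hex
    s3.put_object(Bucket=bucket, Key=f"staging/{key}", Body=json.dumps({"events": events}))
    with db.cursor() as cur:                       # db: any DB-API connection to MySQL
        cur.execute("INSERT INTO batches (s3_key) VALUES (%s)", (key,))
    db.commit()
    return key

def step2_transfer_order(db, bucket: str) -> None:
    """Step 2 (idempotent): for each pair of consecutive ids, write the final list
    object for the earlier batch with its successor's key already baked in. The same
    input always yields the same object, so concurrent runs cannot corrupt the list."""
    with db.cursor() as cur:
        cur.execute("SELECT id, s3_key FROM batches ORDER BY id")
        rows = cur.fetchall()
    for (_, key), (_, next_key) in zip(rows, rows[1:]):
        staged = json.loads(
            s3.get_object(Bucket=bucket, Key=f"staging/{key}")["Body"].read()
        )
        node = {"events": staged["events"], "next_key": next_key}
        s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(node))
```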
  19. Inspiring Architecture Design – AWS SQS
     • SQS ensures that all messages are processed by means of client acknowledgement. This allowed us not to worry about losing any data, and we based our internal event stream on an SQS-like queue (sketched below).
     • SQS also helps distribute load amongst the workers, which came in really handy for scaling in and out depending on the traffic.
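A minimal sketch of that acknowledgement pattern, using SQS directly with boto3; the queue URL and handler are placeholders. A message is deleted (acknowledged) only after it has been processed, so a crashed worker's messages become visible again and are retried by another worker.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events"  # placeholder

def consume(handle_message) -> None:
    """Receive, process, then acknowledge. Unacknowledged messages reappear after
    the visibility timeout, so no event is lost if a worker dies mid-processing."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            handle_message(msg["Body"])                      # do the actual work first
            sqs.delete_message(                              # acknowledge only on success
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )
```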
  20. Inspiring Algorithm Design – AWS RDS MySQL
     • We love managed services, and RDS was a perfect choice for us, guaranteeing both durability and performance.
     • We use an auto-increment id to order writes. Even though MySQL’s auto-increment id is very performant, we still reduced the number of times we used it, by inserting a row in the table only for a batch of events and not for every event (see the sketch below).
     • We also made sure to delete older rows that had already been processed. This way any internal locking that MySQL performs has fewer rows to contend over.
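The slides do not show the schema, so this is only an illustrative shape of it: one row per batch (never per event), and a periodic prune of rows whose order has already been transferred to S3.

```python
# Illustrative schema: one row per *batch* of events, so the auto-increment counter
# (and MySQL's internal locking around it) is exercised as rarely as possible.
CREATE_BATCHES = """
CREATE TABLE batches (
    id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- global order of batches
    s3_key CHAR(32) NOT NULL                                     -- where the payload lives
) ENGINE=InnoDB;
"""

CLAIM_POSITION = "INSERT INTO batches (s3_key) VALUES (%s)"      # one insert per batch

# Rows already linked into the S3 list are dead weight; deleting them keeps the table
# small and leaves MySQL fewer rows to contend over.
PRUNE_PROCESSED = "DELETE FROM batches WHERE id <= %s"           # %s = last linked id
```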
  21. Inspiring Algorithm Design – AWS S3
     • An S3 object cannot be partially updated or appended to.
     • Hence, we could not construct a linked list in the traditional sense, i.e., by updating the pointer to the next node.
     • Instead, we had to know the location of the next S3 object in advance. (See the sketch below.)
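To make the constraint concrete, a tiny sketch reusing the illustrative node layout from earlier: the successor's key has to be chosen before the current object is written, because the only way to change an S3 object afterwards is to rewrite it in full.

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")

def write_node(bucket: str, key: str, events: list, next_key: str) -> None:
    """The next pointer is part of the object from the moment it is created; there is
    no S3 API to append to or patch an existing object later."""
    node = {"events": events, "next_key": next_key}
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(node))

# The successor's key is therefore generated up front and threaded through the writes.
first_key, second_key = uuid.uuid4().hex, uuid.uuid4().hex
write_node("journal-bucket", first_key, [{"event": 1}], next_key=second_key)  # placeholder bucket
```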
  22. Inspiring API Design – AWS S3
     • Not only does AWS S3 charge by the number of requests, even the performance limits on a partition are defined in terms of requests per second.
     • Originally, in an API call a client could specify the number of events it wanted to consume. However, this simple feature took away our control over the number of S3 GET requests we in turn had to make.
     • We changed the API definition so that a client can now only specify an upper limit on the number of events, and the API is allowed to return fewer events than that. (A sketch of this contract follows below.)
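A sketch of that contract, building on the earlier illustrative reader: the client's limit is only a ceiling, and the server independently caps the number of S3 GETs it will spend on one request, returning whatever it has gathered by then plus a cursor to continue from. All names here are assumptions, not the real Journaling API.

```python
import json
import boto3

s3 = boto3.client("s3")
MAX_GETS_PER_REQUEST = 10   # server-side GET budget, independent of the client's wish

def fetch_node(bucket: str, key: str) -> dict:
    """One S3 GET for one list node (same illustrative layout as the reader sketch)."""
    return json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

def get_events(bucket: str, cursor: tuple, client_limit: int) -> dict:
    """cursor = (object key, offset into that object's batch). The limit is a ceiling,
    not a promise: stop at the GET budget or the limit, whichever comes first."""
    key, offset = cursor
    events, gets = [], 0
    while key is not None and gets < MAX_GETS_PER_REQUEST and len(events) < client_limit:
        node = fetch_node(bucket, key)
        gets += 1
        batch = node["events"][offset:]
        take = min(len(batch), client_limit - len(events))
        events.extend(batch[:take])
        if take < len(batch):                    # stopped mid-object: resume here next time
            return {"events": events, "next_cursor": (key, offset + take)}
        key, offset = node.get("next_key"), 0
    return {"events": events, "next_cursor": (key, offset)}
```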
  23. Inspiring Architecture Design – AWS S3
     • AWS S3 partitions the data using object key prefixes, and the performance limits on S3 are also defined at a per-partition level.
     • Because we wanted to maximize performance, we spread out our object keys and used completely random GUIDs as the object keys (a small sketch follows).
     • For the same performance reasons, we also added the capability to add more S3 buckets at runtime, so the system can start distributing load over them if needed.
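A tiny sketch of the idea, with illustrative bucket names: fully random keys avoid hot key prefixes, and the set of buckets is plain configuration, so more can be added without a code change.

```python
import random
import uuid

# Buckets are configuration, not code: appending a name here starts spreading new
# writes over an additional bucket (illustrative names).
JOURNAL_BUCKETS = ["io-events-journal-1", "io-events-journal-2"]

def choose_location() -> tuple:
    """Random GUID keys spread load across S3 key-range partitions; a random bucket
    choice spreads load across however many buckets are currently configured."""
    return random.choice(JOURNAL_BUCKETS), uuid.uuid4().hex
```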
  24. Results – Performance
     • Average time taken to write a batch of events: ~2 seconds (batching 60%, S3 writes 8%, database 2%, time delay 30%).
     • Average time taken to read a batch of events: ~0.06 seconds (S3 reads 67%, auth 33%).
  25. Results – Costs
     Cost of ingesting 1M events/min for 100 clients ($/hour):
     • Approach-2: DB + Redis 11.11, S3 Writes 4.50, S3 Reads 10.80, Total 26.41
     • Now: DB + Redis 0.72, S3 Writes 7.20, S3 Reads 0.29, Total 8.21
  26. Conclusion: What did we learn?
     • Always understand the problem you’re trying to solve. Find out how the system is intended to be used and design accordingly. Also find out the aspects in which the system is allowed to be less than perfect – for us it was being near real time rather than strictly real time.
     • Do not think up a system architecture and then try to fit various services into it later; rather, design the architecture around the strengths and limits of the underlying components you use.
     • Listen to experienced folks to cut losses early. (Distributed locks, ZooKeeper.)
  27. Find me everywhere as @theManikJindal. Please spare two minutes for feedback: https://tinyurl.com/tmj-dll
