
YugaByte DB Internals - Storage Engine and Transactions



Presented at NorCal DB Day https://sites.google.com/view/norcaldb18/home


  1. Introducing YugaByte DB. Kannan Muthukkaruppan, Co-Founder/CEO; Mikhail Bautin, Co-Founder/Architect. NorCal DB Day, May 2018.
  2. About Us. Founders: Kannan Muthukkaruppan, CEO (Nutanix, Facebook, Oracle; IIT-Madras, University of California-Berkeley); Karthik Ranganathan, CTO (Nutanix, Facebook, Microsoft; IIT-Madras, University of Texas-Austin); Mikhail Bautin, Software Architect (ClearStory Data, Facebook, D.E. Shaw; Nizhny Novgorod State University, Stony Brook). Founded Feb 2016. Apache HBase committers and early engineers on Apache Cassandra. Built Facebook's NoSQL platform powered by Apache HBase and scaled it to serve many mission-critical use cases, including Facebook Messages (Messenger) and the Operational Data Store (time series data). Reassembled the same Facebook team at YugaByte along with engineers from Oracle, Google, Nutanix and LinkedIn.
  3. What is YugaByte DB? A transactional, high-performance database for building planet-scale cloud services.
  4. Why another database?
  5. Typical Stack Today: fragile infra with several moving parts. An application tier of stateless microservices sits on top of: SQL for OLTP data, with manual sharding (cost: dev team) and manual replication and failover across datacenters (cost: ops team); NoSQL for other data, with the app aware of each data silo (cost: dev team); and a cache for low latency, with the app doing its own caching (cost: dev team). The result: data inconsistency/loss, fragile infra, and hours of debugging (cost: dev + ops team).
  6. Does AWS change this? Swap in Aurora, DynamoDB and Elasticache across datacenters and it is still complex: it's the same architecture.
  7. YugaByte DB. Transactional: distributed ACID transactions; document-based, strongly consistent storage. High Performance: low-latency, tunable reads; high throughput. Planet-Scale: auto sharding & rebalancing; global data distribution. Cloud-Native: built for the container era; self-healing, fault-tolerant. Open Source: Apache 2.0; popular APIs extended (Apache Cassandra, Redis, and PostgreSQL in beta).
  8. Architecture. DocDB Storage: a transactional key-document store based on a heavily customized version of RocksDB; a flexible storage engine with single-row & multi-row ACID transactions. Raft-based replication: highly resilient, used for both data replication & leader election; tablet leaders and followers are spread across nodes with automated sharding & load balancing. Transaction Manager: tracks ACID transactions across multi-row operations, including clock skew management. Popular APIs extended for app-dev agility: YCQL (Cassandra-compatible) and YEDIS (Redis-compatible, beta). Portable across clouds; works with mature ecosystems (SMACK apps).
  9. Architecture (diagram-only slide).
  10. Design Goals: ✓ Highly scalable & resilient ✓ Transactional, with strong consistency ✓ All layers in C++ for high performance ✓ No dependencies on external systems ✓ Cloud-native, with online re-configuration.
  11. Consistency Goals, similar to Google Spanner. CAP: consistent and partition-tolerant, with HA on failures (a new leader is elected in seconds). PACELC: with no failure, low latency; on failure, trade off latency for consistency.
  12. API Goals, similar to Azure Cosmos DB: ✓ Multi-model ✓ Start with well-known APIs ✓ Extend to fill functionality gaps ✓ APIs supported: Cassandra Query Language, Redis, and PostgreSQL (in the works).
  13. Best of cloud-native meets open source: existing alternatives are either globally consistent with ACID transactions but SQL-API-only, or multi-model and high-performance but not transactional, not globally consistent, or lower-performance. YugaByte DB aims to combine all of these.
  14. DB Features. From SQL: strong consistency, secondary indexes, ACID transactions, an expressive query language. From NoSQL: tunable read latency, writes optimized for large data sets, data expiry with TTL, scale-out and fault tolerance.
  15. Distributed ACID Transactions (YCQL): multi-row/multi-shard operations at any scale.
  16. Native JSON Data Type (YCQL): modeling document & flexible-schema use cases.
  17. Auto Data Expiry with TTL (YCQL, YEDIS): the database tracks and expires older data. Demo: write a key with a 10-second expiry, query the key right away, then query it again after 10 seconds. A sketch of the expiry check appears below.
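As a rough illustration of the server-side check this implies (ValueMeta and IsExpired are invented names for this sketch, not YugaByte's actual code), a value is treated as expired once the read time passes its write time plus its TTL:

    #include <cstdint>

    // Hypothetical sketch: a value written at write_time_us with a TTL is
    // expired once the read time passes write_time + TTL. During compaction,
    // expired entries can be turned into delete markers.
    struct ValueMeta {
      int64_t write_time_us;  // physical part of the write's hybrid time
      int64_t ttl_us;         // negative means "no TTL"
    };

    bool IsExpired(const ValueMeta& v, int64_t read_time_us) {
      return v.ttl_us >= 0 && read_time_us > v.write_time_us + v.ttl_us;
    }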
  18. (image-only slide)
  19. Data Persistence in DocDB. DocDB is YugaByte DB's LSM storage engine: a persistent key-to-data-structure store. Keys are ordered, composed of hash and range components; values are primitives (int32, double, etc.) or objects (maps, nested maps). It extends and enhances RocksDB.
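To make the key ordering concrete, here is a minimal sketch (field names and the encoding are hypothetical, not YugaByte's actual classes) of a key that sorts by hash bucket first and by range components within a bucket:

    #include <cstdint>
    #include <string>
    #include <tuple>
    #include <vector>

    // Hypothetical sketch of an ordered document key: rows hash to a shard
    // bucket, and within a bucket they sort by the range components.
    struct DocKey {
      uint16_t hash_bucket;                  // hash of the hash columns
      std::vector<std::string> hash_parts;   // encoded hash column values
      std::vector<std::string> range_parts;  // encoded range column values

      bool operator<(const DocKey& other) const {
        return std::tie(hash_bucket, hash_parts, range_parts) <
               std::tie(other.hash_bucket, other.hash_parts,
                        other.range_parts);
      }
    };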
  20. DocDB: A Key-to-Object/Document Store. A document key is the CQL/SQL primary key or the Redis key; documents are CQL/SQL rows and Redis data structures.
  21. DocDB: A Key-to-Object/Document Store. Generated RocksDB keys have this format: DocKey, SubKey1, …, SubKeyN, Timestamp -> Value, where "subkeys" are, e.g., a CQL/SQL column or a Redis map key. For example,
      INSERT INTO products (prod_id, attrs, price) VALUES ('p1', {'h' : 7, 'w': 5}, 99)
      produces:
      DocKey('p1'), HybridTime(1526000000) -> {}
      DocKey('p1'), ColumnId(attrs), HybridTime(1526000000) -> {}
      DocKey('p1'), ColumnId(attrs), 'h', HybridTime(1526000000) -> 7
      DocKey('p1'), ColumnId(attrs), 'w', HybridTime(1526000000) -> 5
      DocKey('p1'), ColumnId(price), HybridTime(1526000000) -> 99
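A hedged sketch of the flattening step (FlattenRow and the string-based encoding are stand-ins for illustration): each column, and each element of a collection column, becomes its own RocksDB key/value pair with the write's hybrid time appended to the key.

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical sketch: flatten one row's columns into individual
    // RocksDB key/value pairs, appending the hybrid time of the write to
    // every key so that older versions remain readable.
    struct KeyValue { std::string key; std::string value; };

    std::vector<KeyValue> FlattenRow(
        const std::string& doc_key,
        const std::map<std::string, std::string>& columns,
        uint64_t hybrid_time) {
      std::vector<KeyValue> out;
      const std::string ht = "@" + std::to_string(hybrid_time);
      out.push_back({doc_key + ht, "{}"});  // row-existence marker
      for (const auto& [column, value] : columns) {
        out.push_back({doc_key + "/" + column + ht, value});
      }
      return out;
    }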
  22. Some of the RocksDB enhancements. WAL and MVCC: removed the RocksDB WAL and re-used the Raft log in its place; MVCC moved to a higher layer; RocksDB memstore flushing is coordinated with Raft log garbage collection (see the sketch below). File format changes: sharded (multi-level) indexes and Bloom filters. Data blocks & metadata split into separate files for tiering support. Separate queues for large and small compactions.
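One way to picture the flush/GC coordination (TabletLogState and MaxGcIndex are invented names; this is a sketch of the invariant, not the implementation): since the Raft log doubles as the WAL, it may only be trimmed up to the newest operation whose effects have been flushed out of the memstore.

    #include <cstdint>

    // Hypothetical sketch: log entries can only be garbage-collected once
    // every operation they contain is durably covered by SSTables;
    // otherwise those operations could not be replayed after a restart.
    struct TabletLogState {
      int64_t last_flushed_op_index;  // highest op covered by SSTables
      int64_t log_tail_op_index;      // newest op present in the Raft log
    };

    // The Raft log can be trimmed up to (and including) this index. If the
    // log grows too large, the memstore is flushed first to advance it.
    int64_t MaxGcIndex(const TabletLogState& s) {
      return s.last_flushed_op_index;
    }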
  23. More Enhancements to RocksDB: data-model-aware Bloom filters; per-SSTable key range metadata to optimize range queries; server-global block caches & memstore limits; a scan-resistant block cache with single-touch and multi-touch pools (sketched below).
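The scan-resistance idea in miniature (a sketch under assumed names; per-pool LRU eviction is elided): a block enters a small single-touch pool on first access and is promoted to the multi-touch pool only on a repeat access, so a one-off scan cannot evict the hot working set.

    #include <string>
    #include <unordered_set>

    // Sketch of a scan-resistant cache: first-time blocks live in a small
    // single-touch pool; a repeat access promotes a block to the
    // multi-touch pool, which holds the bulk of the cache capacity.
    class ScanResistantCache {
     public:
      void Access(const std::string& block_id) {
        if (multi_touch_.count(block_id)) return;  // already hot
        if (single_touch_.count(block_id)) {       // second touch: promote
          single_touch_.erase(block_id);
          multi_touch_.insert(block_id);
        } else {
          single_touch_.insert(block_id);          // first touch
          // LRU eviction within each pool is elided in this sketch.
        }
      }
     private:
      std::unordered_set<std::string> single_touch_;
      std::unordered_set<std::string> multi_touch_;
    };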
  24. Raft-Related Enhancements: leader leases; leader balancing; group commits; observer nodes / read replicas (tunable read consistency).
  25. Raft Extension: Leader Leases. Scenario: three tablet peers all hold x=10. (1) A network partition cuts off the old leader; (2) a new leader is elected; (3) a client writes x=20, and the new leader replicates it; (4) without leader leases, the client can still reach the old leader and read the stale value x=10.
  26. Raft Extension: Leader Leases. Timeline: Tablet Server 1 is the leader of a tablet and holds a time-bound leader lease; Tablet Server 2 is a follower. When Tablet Server 2 becomes leader, it cannot take load until the old leader's lease expires, so a deposed leader can never serve reads past its lease. A minimal sketch of the check follows.
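A minimal sketch of the lease check, assuming a monotonic local clock (names are illustrative): a leader serves consistent reads only while its lease is unexpired, and a new leader waits out the old lease before taking load.

    #include <chrono>

    // Hypothetical sketch: two peers can never both act as leader, because
    // the old leader stops serving at lease expiry and the new leader only
    // starts after that same expiry has passed.
    using Clock = std::chrono::steady_clock;

    struct LeaderLease {
      Clock::time_point expiration;
    };

    bool CanServeConsistentReads(const LeaderLease& lease) {
      return Clock::now() < lease.expiration;
    }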
  27. Highly Scalable: stress-tested up to 50-node clusters; scales linearly; automatic sharding and load balancing. Source: https://blog.yugabyte.com/scaling-yugabyte-db-to-millions-of-reads-and-writes-fb86cea5ff15
  28. High Performance and Data Density: outperforms the most performant NoSQL DBs, with 2x-5x better performance vs Cassandra. Source: https://blog.yugabyte.com/building-a-strongly-consistent-cassandra-with-better-performance-aa96b1ab51d6
  29. Transactions
  30. Single-Shard Transactions, e.g. INSERT INTO t SET x=10 IF NOT EXISTS: (1) acquire a lock on x in the lock manager (in memory, on the leader only); (2) read the current value of x from DocDB/RocksDB; (3) submit a Raft operation for replication: set x=10 at hybrid_time 100; (4) replicate to a majority of tablet peers' Raft logs; (5) apply x=10 @ ht=100 to RocksDB and release the lock. A sketch of this leader-side path follows.
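A hedged end-to-end sketch of those five steps (all names illustrative: the std::map stands in for RocksDB, the mutex for the per-key lock manager, and Raft replication is stubbed out):

    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <optional>
    #include <string>

    // Hypothetical sketch of the leader-side single-shard write path:
    // lock, read, replicate through Raft, apply, unlock.
    struct Tablet {
      std::mutex row_lock;                // stand-in for the lock manager
      std::map<std::string, int> store;   // stand-in for RocksDB
      uint64_t next_ht = 100;

      std::optional<int> Read(const std::string& key) {
        auto it = store.find(key);
        if (it == store.end()) return std::nullopt;
        return it->second;
      }
      bool ReplicateViaRaft(const std::string&, int, uint64_t) {
        return true;  // assume a majority of peers acknowledged
      }
    };

    bool InsertIfNotExists(Tablet& t, const std::string& key, int value) {
      std::lock_guard<std::mutex> guard(t.row_lock);  // step 1: lock x
      if (t.Read(key).has_value()) return false;      // step 2: read x
      uint64_t ht = t.next_ht++;                      // pick hybrid time
      if (!t.ReplicateViaRaft(key, value, ht))        // steps 3-4: Raft
        return false;
      t.store[key] = value;                           // step 5: apply
      return true;                                    // lock released here
    }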
  31. MVCC based on Hybrid Time. HybridTime is an always-increasing cluster-wide timestamp (http://users.ece.utexas.edu/~garg/pdslab/david/hybrid-time-tech-report-01.pdf). Every RocksDB key includes a HybridTime at the end, which allows reads at a particular snapshot without locking. Compactions: overwritten/deleted entries are garbage-collected as soon as all read operations at old timestamps are done; TTL-expired entries turn into delete markers; on minor compactions, delete markers have to be kept!
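A simplified sketch of the hybrid timestamp construction (illustrative, condensed from the cited report): a physical clock reading plus a logical counter that breaks ties and keeps the value increasing even if the physical clock stalls or regresses.

    #include <chrono>
    #include <cstdint>

    // Sketch of a hybrid timestamp: physical microseconds plus a logical
    // counter, monotonically increasing on every call to Now().
    struct HybridTime {
      int64_t physical_us;
      uint32_t logical;
    };

    class HybridClock {
     public:
      HybridTime Now() {
        int64_t wall = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        if (wall > last_.physical_us) {
          last_ = {wall, 0};   // physical clock advanced: reset logical
        } else {
          ++last_.logical;     // clock stalled/regressed: bump logical
        }
        return last_;
      }
     private:
      HybridTime last_{0, 0};
    };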
  32. Single-Shard Transactions. HybridTime values are strictly increasing in each tablet. Each tablet maintains a "safe time" that is used for reads: the highest timestamp such that the view as of that timestamp is fixed. In the common case it is just before the hybrid time of the next uncommitted record in the tablet (sketched below).
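A stand-in sketch of that rule (SafeTime and the simplified HT type are assumptions for illustration):

    #include <cstdint>
    #include <optional>

    // Hypothetical sketch of per-tablet "safe time": if an uncommitted
    // (in-flight) record exists, reads are safe only just before its
    // hybrid time; otherwise the current hybrid time is safe.
    struct HT { int64_t v; };  // simplified hybrid time

    HT SafeTime(HT current_hybrid_time, std::optional<HT> earliest_in_flight) {
      if (earliest_in_flight.has_value()) {
        return HT{earliest_in_flight->v - 1};  // just before pending record
      }
      return current_hybrid_time;              // nothing pending: now is safe
    }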
  33. Distributed Transactions: a fully decentralized architecture. Every tablet server can act as a Transaction Manager; there is a distributed transaction status table, and every transaction is assigned to a status tablet.
  34. Distributed Transactions – Write Path (overview).
  35. Write Path, Step 1: Client request.
  36-37. Write Path, Step 2: Create status record.
  38. Write Path, Step 3: Write provisional records.
  39. Write Path, Step 4: Atomic commit.
  40. Write Path, Step 5: Respond to client.
  41. Write Path, Step 6: Apply provisional records. (An end-to-end sketch of these steps follows.)
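Putting the six steps together in one hedged sketch (all types and names are illustrative; in reality each step involves RPCs and Raft replication at every tablet): the commit point is the single status-record update, and provisional records are applied afterwards in the background.

    #include <map>
    #include <string>
    #include <vector>

    // Illustrative sketch of the distributed write path: create a status
    // record, write provisional records, flip the status to COMMITTED
    // (the atomic commit point), respond, then apply provisional records.
    enum class TxnStatus { PENDING, COMMITTED, ABORTED };

    struct StatusTablet { std::map<std::string, TxnStatus> txns; };
    struct DataTablet {
      std::map<std::string, std::string> provisional;  // txn-tagged records
      std::map<std::string, std::string> committed;
    };

    bool WriteTransaction(StatusTablet& status,
                          std::vector<DataTablet*> tablets,
                          const std::string& txn_id,
                          const std::string& key, const std::string& value) {
      status.txns[txn_id] = TxnStatus::PENDING;     // step 2: status record
      for (DataTablet* t : tablets)                 // step 3: provisional
        t->provisional[txn_id + "/" + key] = value;
      status.txns[txn_id] = TxnStatus::COMMITTED;   // step 4: atomic commit
      // Step 5: respond to the client here. Step 6 runs asynchronously:
      for (DataTablet* t : tablets) {
        t->committed[key] = value;
        t->provisional.erase(txn_id + "/" + key);
      }
      return true;
    }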
  42. Isolation Levels. Currently Snapshot Isolation is supported: write-write conflicts are detected when writing provisional records. Serializable isolation is on the roadmap: reads in read-write txns also need provisional records. Read-only transactions are always lock-free.
  43. Clock Skew and Read Restarts. The read timestamp must be high enough that committed records the client might already have seen are visible. The approach: optimistically use the current Hybrid Time and re-read if necessary. A read is restarted if it encounters a record whose timestamp is above the read timestamp but close enough, within the bounded clock skew (NTP, AWS Time Sync), that the client could have seen it committed; the restart happens at most once per tablet. This only affects multi-row reads of frequently updated records. The restart rule is sketched below.
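A sketch of that rule (MaybeRestartReadAt is a hypothetical name; the real logic tracks this per tablet): only records in the uncertainty window (ht_read, ht_read + max_clock_skew] force a restart.

    #include <cstdint>
    #include <optional>

    // Sketch of the read-restart rule: a record committed after ht_read
    // but within the clock-skew bound might already have been seen by the
    // client, so the read must restart at a higher timestamp.
    struct HT { int64_t v; };  // simplified hybrid time

    std::optional<HT> MaybeRestartReadAt(HT ht_read, HT record_ht,
                                         int64_t max_clock_skew) {
      if (record_ht.v > ht_read.v &&
          record_ht.v <= ht_read.v + max_clock_skew) {
        return HT{record_ht.v};  // re-read at the record's timestamp
      }
      return std::nullopt;       // record definitely (in)visible: no restart
    }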
  44. Distributed Transactions – Read Path (overview).
  45. Read Path, Step 1: Client request; pick ht_read.
  46. Read Path, Step 2: Read from tablet servers.
  47. Read Path, Step 3: Resolve txn status.
  48. Read Path, Step 4: Respond to YQL Engine.
  49. Read Path, Step 5: Respond to client. (A combined sketch of steps 2-3 follows.)
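A combined sketch of reading a key and resolving transaction status (ReadKey and the in-memory maps are stand-ins; in reality the status is resolved via an RPC to the transaction's status tablet):

    #include <map>
    #include <optional>
    #include <string>
    #include <utility>

    // Illustrative read-path sketch: if a provisional record is found for
    // the key, look up its owning transaction's status to decide whether
    // the record is visible; otherwise fall back to committed data.
    enum class TxnStatus { PENDING, COMMITTED, ABORTED };

    std::optional<std::string> ReadKey(
        const std::map<std::string, std::string>& committed,
        const std::map<std::string,
                       std::pair<std::string, std::string>>&
            provisional,  // key -> (txn_id, value)
        const std::map<std::string, TxnStatus>& status_tablet,
        const std::string& key) {
      auto p = provisional.find(key);
      if (p != provisional.end() &&
          status_tablet.at(p->second.first) == TxnStatus::COMMITTED) {
        return p->second.second;  // provisional record already committed
      }
      auto c = committed.find(key);
      if (c != committed.end()) return c->second;
      return std::nullopt;
    }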
  50. Distributed Transactions – Conflicts & Retries. Every transaction is assigned a random priority; in a conflict, the higher-priority transaction wins. The restarted transaction gets a new random priority, so its probability of success quickly increases with retries. Restarting a transaction is the same as starting a new one. A read-write transaction can also be subject to read-restart. (Sketch below.)
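A minimal sketch of the priority mechanism (Txn, RandomPriority and ResolveConflict are invented names for illustration):

    #include <cstdint>
    #include <random>

    // Sketch of priority-based conflict resolution: each transaction gets
    // a random priority; on conflict the lower-priority transaction
    // restarts with a fresh random priority, so its chance of winning
    // rises across retries.
    struct Txn { uint64_t id; uint64_t priority; };

    uint64_t RandomPriority() {
      static std::mt19937_64 rng{std::random_device{}()};
      return rng();
    }

    // Returns true if `ours` wins the conflict and `theirs` must restart.
    bool ResolveConflict(const Txn& ours, const Txn& theirs) {
      return ours.priority > theirs.priority;
    }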
  51. Questions? Try it at docs.yugabyte.com/quick-start
