Bigtable

  1. Google Bigtable
     Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, Robert E. Gruber
     Google, Inc.
     UW CS OS Seminar Discussion
     Erik Paulson, 2 October 2006
     See also the (other) UW presentation by Jeff Dean in September of 2005 (see the link on the seminar page, or just google for “google bigtable”)
  2. Before we begin…
     • Intersection of databases and distributed systems
     • Will try to explain (or at least warn) when we hit a patch of database
     • Remember this is a discussion!
  3. Google Scale
     • Lots of data
       – Copies of the web, satellite data, user data, email and USENET, Subversion backing store
     • Many incoming requests
     • No commercial system big enough
       – Couldn’t afford it if there was one
       – Might not have made appropriate design choices
     • Firm believers in the End-to-End argument
     • 450,000 machines (NYTimes estimate, June 14th, 2006)
  4. Building Blocks
     • Scheduler (Google WorkQueue)
     • Google Filesystem
     • Chubby Lock service
     • Two other pieces helpful but not required
       – Sawzall
       – MapReduce (despite what the Internet says)
     • BigTable: build a more application-friendly storage service using these parts
  5. Google File System
     • Large-scale distributed “filesystem”
     • Master: responsible for metadata
     • Chunk servers: responsible for reading and writing large chunks of data
     • Chunks replicated on 3 machines, master responsible for ensuring replicas exist
     • OSDI ’04 Paper
  6. Chubby
     • {lock/file/name} service
     • Coarse-grained locks, can store a small amount of data in a lock
     • 5 replicas, need a majority vote to be active
     • Also an OSDI ’06 Paper
  7. Data model: a big map
     • <Row, Column, Timestamp> triple for key; lookup, insert, and delete API (sketch below)
     • Arbitrary “columns” on a row-by-row basis
       – Column family:qualifier. Family is heavyweight, qualifier lightweight
       – Column-oriented physical store; rows are sparse!
     • Does not support a relational model
       – No table-wide integrity constraints
       – No multirow transactions
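     A minimal sketch of this data model, assuming only what the slide says: a sparse, sorted map
     from (row, column, timestamp) to an uninterpreted string, with multiple versions kept per cell.
     The class and method names are illustrative, not Bigtable’s actual API; the example row and
     column follow the paper’s webtable example.

        # Toy (row, "family:qualifier", timestamp) -> value map. Hypothetical names.
        import time

        class TinyBigtable:
            def __init__(self):
                # row -> column -> {timestamp: value}; rows are sparse, so a row
                # only stores the columns it actually has.
                self.rows = {}

            def insert(self, row, column, value, timestamp=None):
                ts = timestamp if timestamp is not None else time.time()
                self.rows.setdefault(row, {}).setdefault(column, {})[ts] = value

            def lookup(self, row, column):
                """Return the most recent version of a cell, or None."""
                versions = self.rows.get(row, {}).get(column, {})
                return versions[max(versions)] if versions else None

            def delete(self, row, column):
                self.rows.get(row, {}).pop(column, None)

        t = TinyBigtable()
        t.insert("com.cnn.www", "anchor:cnnsi.com", "CNN", timestamp=1)
        print(t.lookup("com.cnn.www", "anchor:cnnsi.com"))  # -> CNN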
  8. SSTable
     • Immutable, sorted file of key-value pairs (sketch below)
     • Chunks of data plus an index
       – Index is of block ranges, not values
     [Diagram: an SSTable made up of 64K data blocks plus a block index]
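     A rough sketch of that structure, with assumptions noted in the comments: a real SSTable keeps
     64KB blocks in a file and loads the block index into memory, whereas this toy version keeps
     everything in lists.

        # Toy SSTable: sorted (key, value) pairs split into blocks, with an index
        # over block ranges (the last key of each block), not over every value.
        import bisect

        class SSTableSketch:
            BLOCK_SIZE = 4  # entries per block; ~64KB of bytes in the real system

            def __init__(self, sorted_items):
                # sorted_items: list of (key, value) pairs, already sorted; the
                # structure is immutable once built.
                self.blocks = [sorted_items[i:i + self.BLOCK_SIZE]
                               for i in range(0, len(sorted_items), self.BLOCK_SIZE)]
                self.index = [block[-1][0] for block in self.blocks]

            def get(self, key):
                i = bisect.bisect_left(self.index, key)   # pick the candidate block
                if i == len(self.blocks):
                    return None
                for k, v in self.blocks[i]:               # scan one small block
                    if k == key:
                        return v
                return None

        sst = SSTableSketch([("a", 1), ("b", 2), ("d", 4), ("e", 5), ("g", 7)])
        print(sst.get("d"))  # -> 4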
  9. Tablet
     • Contains some range of rows of the table
     • Built out of multiple SSTables (sketch below)
     [Diagram: a tablet (start: aardvark, end: apple) built from two SSTables, each with 64K blocks and an index]
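     A sketch of a tablet as a row range served from several SSTables. Assumptions beyond the slide:
     a point read consults the SSTables from newest to oldest and returns the first hit, and a scan
     merges the already-sorted SSTables; all names are illustrative.

        import heapq

        class TabletSketch:
            def __init__(self, start_row, end_row, sstables):
                self.start_row, self.end_row = start_row, end_row
                # Newest first; each SSTable is a sorted list of (row, value).
                self.sstables = sstables

            def read(self, row):
                assert self.start_row <= row <= self.end_row, "row not in this tablet"
                for sst in self.sstables:
                    for r, v in sst:
                        if r == row:
                            return v
                return None

            def scan(self):
                # Merge the sorted SSTables into one sorted stream of (row, value).
                return list(heapq.merge(*self.sstables))

        tablet = TabletSketch("aardvark", "apple",
                              [[("ant", 2)], [("aardvark", 1), ("apple", 3)]])
        print(tablet.read("ant"))  # -> 2
        print(tablet.scan())       # -> [('aardvark', 1), ('ant', 2), ('apple', 3)]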
  10. Table
     • Multiple tablets make up the table
     • SSTables can be shared
     • Tablets do not overlap, SSTables can overlap
     [Diagram: tablets aardvark..apple and apple_two_E..boat built from four SSTables, with sharing between the tablets]
  11. Finding a tablet
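     The slide’s diagram is not reproduced in this transcript. The paper describes a three-level
     lookup: a Chubby file points at the root METADATA tablet, which points at the other METADATA
     tablets, which in turn point at the user tablets (clients cache locations). Below is a sketch
     of only the final step, with hypothetical structures: because tablets don’t overlap, the end
     row of each tablet is enough to route a key.

        import bisect

        # METADATA-style entries for one user table: (end row of tablet, tablet server).
        metadata = [
            ("apple",      "tabletserver-07"),
            ("boat",       "tabletserver-31"),
            ("\xff" * 8,   "tabletserver-12"),  # final tablet covers the tail of the keyspace
        ]

        def locate(row_key):
            """Return the tablet server responsible for row_key."""
            end_rows = [end for end, _ in metadata]
            i = bisect.bisect_left(end_rows, row_key)  # first tablet whose end row >= row_key
            return metadata[i][1]

        print(locate("aardvark"))     # -> tabletserver-07
        print(locate("apple_two_E"))  # -> tabletserver-31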
  12. Servers
     • Tablet servers manage tablets, multiple tablets per server. Each tablet is 100-200 megs
       – Each tablet lives at only one server
       – Tablet server splits tablets that get too big
     • Master responsible for load balancing and fault tolerance
       – Use Chubby to monitor health of tablet servers, restart failed servers
       – GFS replicates data. Prefer to start a tablet server on the same machine where the data already is
  13. Editing a table
     • Mutations are logged, then applied to an in-memory version (sketch below)
     • Logfile stored in GFS
     [Diagram: Insert/Delete mutations accumulate in a memtable in front of the tablet’s (apple_two_E..boat) SSTables]
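     A minimal sketch of that write path, under the same toy assumptions as before: append the
     mutation to a commit log (GFS in the real system, a plain list here), then apply it to the
     in-memory memtable. On a tablet server restart, the log is replayed to rebuild the memtable.

        class WritePathSketch:
            def __init__(self):
                self.log = []        # stand-in for the commit log stored in GFS
                self.memtable = {}   # row -> value, with None as a deletion tombstone

            def insert(self, row, value):
                self.log.append(("insert", row, value))  # 1. make it durable
                self.memtable[row] = value               # 2. apply in memory

            def delete(self, row):
                self.log.append(("delete", row))
                self.memtable[row] = None                # tombstone, dropped at major compaction

            def recover(self, logged_mutations):
                # Replay a commit log to rebuild the in-memory state after a crash.
                for entry in logged_mutations:
                    self.memtable[entry[1]] = entry[2] if entry[0] == "insert" else None

        t = WritePathSketch()
        t.insert("apple_two_E", "some value")
        t.delete("boat")
        print(t.memtable)  # -> {'apple_two_E': 'some value', 'boat': None}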
  14. Compactions
     • Minor compaction – convert the memtable into an SSTable (sketch below)
       – Reduce memory usage
       – Reduce log traffic on restart
     • Merging compaction
       – Reduce number of SSTables
       – Good place to apply policy “keep only N versions”
     • Major compaction
       – Merging compaction that results in only one SSTable
       – No deletion records, only live data
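     A sketch of the three compaction flavors over the toy structures above (row -> value maps,
     None as a deletion tombstone); the exact policies are illustrative.

        def minor_compaction(memtable):
            """Freeze the memtable into a new sorted, immutable SSTable."""
            return sorted(memtable.items())

        def merging_compaction(sstables):
            """Collapse several SSTables into one, keeping the newest value per row."""
            merged = {}
            for sst in reversed(sstables):   # oldest first, so newer tables overwrite
                for row, value in sst:
                    merged[row] = value
            return sorted(merged.items())

        def major_compaction(sstables):
            """Merging compaction down to one SSTable with no deletion records left."""
            return [(row, v) for row, v in merging_compaction(sstables) if v is not None]

        sstables = [[("boat", None)], [("apple", 1), ("boat", 2)]]  # newest first
        print(merging_compaction(sstables))  # -> [('apple', 1), ('boat', None)]
        print(major_compaction(sstables))    # -> [('apple', 1)]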
  15. Locality Groups
     • Group column families together into an SSTable
       – Avoid mingling data, e.g. page contents and page metadata
       – Can keep some groups all in memory
     • Can compress locality groups
     • Bloom Filters on locality groups – avoid searching SSTable (sketch below)
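     A sketch of the Bloom-filter idea: a small per-SSTable (per locality group) bit array that can
     answer “definitely not present” cheaply, so most lookups never read SSTables that cannot contain
     the row/column pair. The sizing and the two derived hashes below are illustrative choices, not
     the paper’s parameters.

        import hashlib

        class BloomSketch:
            def __init__(self, nbits=1024):
                self.nbits = nbits
                self.bits = bytearray(nbits // 8)

            def _positions(self, key):
                digest = hashlib.sha1(key.encode()).digest()
                h1 = int.from_bytes(digest[:8], "big")
                h2 = int.from_bytes(digest[8:16], "big")
                return [(h1 + i * h2) % self.nbits for i in range(2)]

            def add(self, key):
                for p in self._positions(key):
                    self.bits[p // 8] |= 1 << (p % 8)

            def might_contain(self, key):
                # False means definitely absent; True means "go look in the SSTable".
                return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

        bf = BloomSketch()
        bf.add("com.cnn.www/anchor:cnnsi.com")
        print(bf.might_contain("com.cnn.www/anchor:cnnsi.com"))  # -> True
        print(bf.might_contain("com.example/contents:"))         # -> almost certainly False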
  16. Microbenchmarks
  18. Application at Google
  19. Lessons learned
     • Interesting point: only implement some of the requirements, since the rest are probably not needed
     • Many types of failure possible
     • Big systems need proper systems-level monitoring
     • Value simple designs
