Google Megastore
 

  • Read algorithm:
    1. Query Local: Query the local replica's coordinator to determine whether the entity group is up-to-date locally.
    2. Find Position: Determine the highest possibly-committed log position, and select a replica that has applied through that log position.
       (a) Local read: If step 1 indicates that the local replica is up-to-date, read the highest accepted log position and timestamp from the local replica.
       (b) Majority read: If the local replica is not up-to-date (or if step 1 or step 2a times out), read from a majority of replicas to find the maximum log position that any replica has seen, and pick a replica to read from. Select the most responsive or most up-to-date replica, not necessarily the local one.
    3. Catchup: As soon as a replica is selected, catch it up to the maximum known log position as follows:
       (a) For each log position at which the selected replica does not know the consensus value, read the value from another replica. For any log position without a known-committed value available, invoke Paxos to propose a no-op write; Paxos will drive a majority of replicas to converge on a single value, either the no-op or a previously proposed write.
       (b) Sequentially apply the consensus values of all unapplied log positions to advance the replica's state to the distributed consensus state. In the event of failure, retry on another replica.
    4. Validate: If the local replica was selected and was not previously up-to-date, send the coordinator a validate message asserting that the (entity group, replica) pair reflects all committed writes. Do not wait for a reply; if the request fails, the next read will retry.
    5. Query Data: Read the selected replica using the timestamp of the selected log position. If the selected replica becomes unavailable, pick an alternate replica, perform catchup, and read from it instead. The results of a single large query may be assembled transparently from multiple replicas.
    Note that in practice, steps 1 and 2a are executed in parallel.
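
To make the control flow concrete, here is a minimal Python sketch of the read path above. The Replica class and megastore_read function are stand-ins invented for illustration, not Megastore's actual interfaces; the coordinator query and the validate step are stubbed out, and Paxos no-op proposals are noted but not implemented.

    # Minimal sketch of the Megastore read path (steps 1-5 above).
    # Replica is a hypothetical in-memory stand-in, not Megastore's API.
    import random
    from dataclasses import dataclass, field

    @dataclass
    class Replica:
        name: str
        log: dict = field(default_factory=dict)   # log position -> committed value
        applied_through: int = -1                 # highest log position applied

        def max_position(self):
            return max(self.log, default=-1)

    def megastore_read(replicas, local, local_is_up_to_date):
        # Steps 1 and 2 (run in parallel in practice): pick a replica and
        # the highest possibly-committed log position.
        if local_is_up_to_date:                   # 2a: local read
            selected, target = local, local.max_position()
        else:                                     # 2b: majority read
            majority = random.sample(replicas, len(replicas) // 2 + 1)
            target = max(r.max_position() for r in majority)
            selected = max(majority, key=lambda r: r.applied_through)

        # Step 3: catchup. Fill unknown positions from peers (a real system
        # would propose Paxos no-op writes where no committed value is
        # known), then apply everything in order.
        for pos in range(selected.applied_through + 1, target + 1):
            if pos not in selected.log:
                donor = next(r for r in replicas if pos in r.log)
                selected.log[pos] = donor.log[pos]
            selected.applied_through = pos        # 3b: apply sequentially

        # Step 4 (validate the coordinator, fire-and-forget) is omitted.
        # Step 5: query data at the selected log position's timestamp.
        return selected.log.get(target)

    a = Replica("A", {0: "w0", 1: "w1"}, applied_through=1)
    b = Replica("B", {0: "w0"}, applied_through=0)
    print(megastore_read([a, b], local=b, local_is_up_to_date=False))  # -> w1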
  • Write algorithm:
    1. Accept Leader: Ask the leader to accept the value as proposal number zero. If successful, skip to step 3.
    2. Prepare: Run the Paxos Prepare phase at all replicas with a higher proposal number than any seen so far at this log position. Replace the value being written with the highest-numbered proposal discovered, if any.
    3. Accept: Ask the remaining replicas to accept the value. If this fails on a majority of replicas, return to step 2 after a randomized backoff.
    4. Invalidate: Invalidate the coordinator at all full replicas that did not accept the value. (Fault handling at this step is described in Section 4.7 of the Megastore paper.)
    5. Apply: Apply the value's mutations at as many replicas as possible. If the chosen value differs from the one originally proposed, return a conflict error.
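
The same steps in a runnable Python sketch. Acceptor and megastore_write are hypothetical stand-ins; the Prepare retry loop, coordinator invalidation, and mutation application are simplified or omitted as noted in the comments.

    # Sketch of the Megastore write path: the accept-leader fast path,
    # then classic Paxos prepare/accept. Illustrative only.
    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Acceptor:
        promised: int = -1        # highest proposal number promised
        accepted_num: int = -1    # proposal number of the accepted value
        accepted_val: Any = None  # accepted value, if any

        def prepare(self, n):
            if n > self.promised:
                self.promised = n
                return True, self.accepted_num, self.accepted_val
            return False, -1, None

        def accept(self, n, value):
            if n >= self.promised:
                self.promised = self.accepted_num = n
                self.accepted_val = value
                return True
            return False

    def megastore_write(acceptors, leader, value):
        majority = len(acceptors) // 2 + 1
        proposed, n = value, 0

        # 1. Accept Leader: fast path, proposal number zero at the leader.
        if not leader.accept(0, value):
            # 2. Prepare at all replicas with a proposal number higher than
            # any seen so far; adopt the highest-numbered accepted value.
            n = max(a.promised for a in acceptors) + 1
            replies = [a.prepare(n) for a in acceptors]
            prior = [(num, val) for ok, num, val in replies if ok and val is not None]
            if prior:
                value = max(prior, key=lambda p: p[0])[1]

        # 3. Accept at the remaining replicas (the randomized-backoff retry
        # from step 3 is omitted in this sketch).
        if sum(a.accept(n, value) for a in acceptors) < majority:
            raise TimeoutError("no majority: retry Prepare after a backoff")

        # 4. Invalidate coordinators at replicas that rejected (omitted).
        # 5. Apply the winning value's mutations; if it differs from what
        # we proposed, surface a conflict error to the application.
        if value is not proposed:
            raise RuntimeError("conflict: a competing write won this position")
        return value

    accs = [Acceptor() for _ in range(3)]
    print(megastore_write(accs, accs[0], "write-1"))  # fast path -> write-1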

Google Megastore: Presentation Transcript

  • Megastore: Providing Scalable, Highly Available Storage for Interactive Services. J. Baker, C. Bond, J. C. Corbett, JJ Furman, A. Khorlin, J. Larson, J-M Léon, Y. Li, A. Lloyd, V. Yushprakh. Google Inc. [email_address] May 2011
  • Agenda
    • Motivation
    • Architecture
    • ACID over a NoSQL Database
    • Replication via Paxos
    • Operational Results
  • Motivation
    • Build a system to please everyone (users, admins, developers).
  • Motivation
    • High availability – Fully functional during planned maintenance periods, as well as during most unplanned infrastructure issues.
    • Scalability – Serves a huge audience of potential users.
    • ACID – Makes writing and deploying applications easier.
  • Agenda
    • Motivation
    • Megastore Architecture
    • ACID over a NoSQL Database
    • Replication via Paxos
    • Operational Results
  • Megastore Overview
    • Widely deployed in Google for several years.
    • Used by more than 100 production applications.
    • Handles more than 3 billion write and 20 billion read transactions daily.
    • Stores nearly a petabyte of primary data across many global datacenters.
    • Available on GAE since Jan 2011.
  • Architecture
    • Built on top of Bigtable and Chubby.
    • Blends the scalability of a NoSQL datastore with the convenience of a traditional RDBMS.
    • Synchronous replication based on Paxos across datacenters.
  • Architecture
  • Architecture
    • Scalable replication.
  • Architecture
    • Operation across Entity Groups
  • Agenda
    • Motivation
    • Megastore Architecture
    • ACID over a NoSQL Database
    • Replication via Paxos
    • Operational Results
  • Data Model
    • Somewhere between an RDBMS and the row-column storage of NoSQL stores.
      • Schemas
      • Tables (entity group root tables and child tables; each child table must declare a single distinguished foreign key referencing its root table)
      • Entities
      • Properties
  • Sample Schema
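
The schema image itself is not captured in this transcript. For reference, the Megastore paper's running example (a photo-sharing application) declares roughly the following schema in Megastore's DDL; this is adapted from the paper's example, so treat the details as approximate:

    CREATE SCHEMA PhotoApp;

    CREATE TABLE User {
      required int64 user_id;
      required string name;
    } PRIMARY KEY(user_id), ENTITY GROUP ROOT;

    CREATE TABLE Photo {
      required int64 user_id;
      required int32 photo_id;
      required int64 time;
      required string full_url;
      optional string thumbnail_url;
      repeated string tag;
    } PRIMARY KEY(user_id, photo_id),
      IN TABLE User,
      ENTITY GROUP KEY(user_id) REFERENCES User;

    CREATE LOCAL INDEX PhotosByTime
      ON Photo(user_id, time);

    CREATE GLOBAL INDEX PhotosByTag
      ON Photo(tag) STORING (thumbnail_url);

Note how the schema encodes the entity-group structure directly: User is the root, Photo is a child keyed by its root's user_id, and both local and global indexes (discussed below) are declared alongside the tables.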
  • Mapping to Bigtable
    • Primary Keys are chosen to cluster entities that will be read together.
    • Each entity is mapped into a single Bigtable row.
    • "IN TABLE" instructs Megastore to colocate tables in the same Bigtable; key ordering ensures that Photo entities are stored adjacent to their corresponding User.
    • Bigtable column name = Megastore table name + property name
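
Putting those rules together, the User and Photo entities interleave in a single Bigtable roughly as follows (illustrative values patterned on the paper's example; the row key 101,500 encodes Photo(user_id=101, photo_id=500)):

    Row key    User.name   Photo.time   Photo.tag       Photo.full_url
    101        John
    101,500                12:30:01     Dinner, Paris   ...
    101,502                12:15:22     Betty, Paris    ...
    102        Mary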
  • Indexes
    • Two levels of indexes:
      • Local indexes: separate indexes for each entity group, stored in the entity group and updated atomically and consistently.
      • Global indexes: span entity groups; not guaranteed to reflect all recent updates.
  • Transactions & Concurrency
    • Each entity group is a mini-database providing serializable ACID semantics.
    • MVCC (multiversion concurrency control) using transaction timestamps.
    • Reads and writes are isolated and do not block each other.
  • Transactions & Concurrency
    • Three levels of read consistency:
      • Current: apply all previously committed log entries before reading, within a single entity group.
      • Snapshot: read from the last known fully applied transaction, within a single entity group.
      • Inconsistent: ignore the state of the log and read the latest value directly.
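
A toy Python model of the three levels, using a hypothetical EntityGroup class (not Megastore's API) whose log may contain committed entries that the local replica has not yet applied:

    # Toy model of Megastore's three read consistency levels.
    class EntityGroup:
        def __init__(self):
            self.committed = []   # committed log entries, oldest first
            self.applied = 0      # entries applied on this replica

        def read(self, level):
            if level == "inconsistent":
                # Ignore the apply state; return the newest committed value.
                return self.committed[-1] if self.committed else None
            if level == "current":
                self.applied = len(self.committed)   # catch up first
            # "snapshot" (and "current" after catchup): last applied value.
            return self.committed[self.applied - 1] if self.applied else None

    eg = EntityGroup()
    eg.committed = ["v1", "v2", "v3"]   # three committed writes...
    eg.applied = 2                      # ...only two applied locally
    print(eg.read("snapshot"))          # v2: last fully applied transaction
    print(eg.read("inconsistent"))      # v3: latest value, ignoring the log
    print(eg.read("current"))           # v3: after applying all committed entries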
  • Transactions & Concurrency
    • Write transaction:
      • Current read: Obtain the timestamp and log position of the last committed transaction.
      • Application logic: Read from Bigtable and gather writes into a log entry.
      • Commit: Use Paxos to achieve consensus on appending the log entry to the log.
      • Apply: Write mutations to the entities and indexes in Bigtable.
      • Clean up: Delete temporary data.
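
The five steps above, sketched against a toy in-memory entity group. EntityGroupTxn and paxos_append are hypothetical stand-ins; the real commit step is the Paxos write algorithm shown near the top of this page.

    # Sketch of a Megastore write transaction on one entity group.
    import time

    class EntityGroupTxn:
        def __init__(self):
            self.log = []    # committed write-ahead log entries
            self.data = {}   # applied state (stands in for Bigtable)

        def paxos_append(self, position, entry):
            # Stand-in for the Paxos commit: succeeds only if no competing
            # write claimed this log position first.
            if position != len(self.log):
                return False
            self.log.append(entry)
            return True

        def commit(self, mutations):
            pos = len(self.log)                                  # 1. current read
            entry = {"ts": time.time(), "mutations": mutations}  # 2. application logic
            if not self.paxos_append(pos, entry):                # 3. commit via Paxos
                raise RuntimeError("conflict: retry the transaction")
            self.data.update(mutations)                          # 4. apply to Bigtable
            # 5. clean up: nothing temporary to delete in this toy model

    eg = EntityGroupTxn()
    eg.commit({"Photo.501.time": "12:30:01"})
    print(eg.data)   # {'Photo.501.time': '12:30:01'}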
  • Transactions & Concurrency
    • Queues provide transactional messaging between entity groups. Declaring a queue automatically creates an inbox on each entity group, so queues scale automatically.
    • Two-phase commit is also supported for atomic updates across entity groups.
    • Queues are recommended over two-phase commit.
  • Agenda
    • Motivation
    • Megastore Architecture
    • ACID over a NoSQL Database
    • Replication via Paxos
    • Operational Results
  • Paxos
    • Basic Paxos
    • Multi-Paxos
  • Reads
  • Writes
  • Failure Detection
    • Coordinators obtain specific Chubby locks in remote datacenters at startup.
    • If a coordinator ever loses a majority of its locks due to a crash or network partition, it considers all entity groups in its purview out-of-date.
    • Subsequent reads at that replica must query the log position from a majority of replicas until the locks are regained and the coordinator's entries are revalidated.
    • Conversely, writers must wait for a failed coordinator's Chubby locks to expire before their writes can complete.
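
A toy model of this fallback behavior. Coordinator is a hypothetical stand-in; lock acquisition and revalidation against Chubby are reduced to a simple counter.

    # Toy model of coordinator failure handling: losing a majority of
    # Chubby locks forces readers back to majority reads.
    class Coordinator:
        def __init__(self, total_locks):
            self.total = total_locks
            self.held = total_locks   # Chubby locks currently held
            self.validated = set()    # entity groups known up-to-date here

        def holds_majority(self):
            return self.held * 2 > self.total

        def is_up_to_date(self, entity_group):
            if not self.holds_majority():   # crash or network partition
                self.validated.clear()      # everything in purview is suspect
                return False                # caller must do a majority read
            return entity_group in self.validated

    c = Coordinator(total_locks=5)
    c.validated.add("eg1")
    print(c.is_up_to_date("eg1"))   # True: a local read is safe
    c.held = 2                      # partition: majority of locks lost
    print(c.is_up_to_date("eg1"))   # False: fall back to majority reads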
  • Agenda
    • Motivation
    • Megastore Architecture
    • ACID over a NoSQL Database
    • Replication via Paxos
    • Operational Results
  • Distribution of Availability
  • Distribution of Average Latencies
  • Conclusion
    • Most users see five-nines availability.
    • Average read latencies are tens of milliseconds, indicating that most reads are local.
    • Most writes cost 100-400 milliseconds.
  • Questions?