Building Distributed Systems With Riak and Riak Core

My talk from DevNation SF 2010.

Transcript

  • 1. Building Distributed Systems With Riak Core. Andy Gross (@argv0), VP Engineering, Basho. DevNation SF 2010
  • 2. Riak K/V
      • Distributed Key-Value Store
      • Based on Amazon’s Dynamo
      • HTTP and Binary (Protocol Buffers) APIs
      • Data access by {Bucket, Key}
      • JavaScript Map/Reduce
      • Link Walking
      • Pluggable Storage (Bitcask, InnoDB, ...)
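
As a concrete taste of the {Bucket, Key} model, here is a minimal sketch using the official Erlang client (riakc) over the Protocol Buffers API. It assumes a local node on the default PB port 8087; the bucket, key, and value are made up for illustration, and the client API shown matches later releases rather than necessarily the one current at this talk.

    %% Store and fetch a value by {Bucket, Key} over the PB API.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),

    %% Write an object under {<<"users">>, <<"argv0">>}.
    Obj = riakc_obj:new(<<"users">>, <<"argv0">>, <<"hello">>),
    ok = riakc_pb_socket:put(Pid, Obj),

    %% Read it back; get_value/1 returns the stored binary.
    {ok, Fetched} = riakc_pb_socket:get(Pid, <<"users">>, <<"argv0">>),
    <<"hello">> = riakc_obj:get_value(Fetched).
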
  • 3. High-Level Dynamo
      • Decentralized (no “master” nodes)
      • Homogeneous (all nodes can do anything)
      • Vector clocks (no reliance on physical time)
      • Gossip Protocol (no global state)
      • Consistent Hashing for replica placement (a local calculation for each node)
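
The last point is worth unpacking: because every node runs the same hash over the same fixed ring, replica placement needs no coordination. Below is a self-contained toy illustrating the idea; it is not riak_core's actual chash module, and it assumes the partition count is a power of two, as Riak's ring sizes are.

    -module(ring_sketch).
    -export([partition_for/2]).

    %% Map a key to one of NumPartitions equal slices of the SHA-1
    %% integer space. Any node can evaluate this locally and arrive
    %% at the same answer, which is the point of consistent hashing.
    partition_for(Key, NumPartitions) ->
        <<HashInt:160/integer>> = crypto:hash(sha, term_to_binary(Key)),
        PartitionSize = (1 bsl 160) div NumPartitions,  % power of two assumed
        HashInt div PartitionSize.

    %% Example: ring_sketch:partition_for({<<"users">>, <<"argv0">>}, 64).
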
  • 4. N, R, W Values
      • N = number of replicas to store (on distinct nodes)
      • R = number of replica responses needed for a successful read (specified per-request)
      • W = number of replica responses needed for a successful write (specified per-request)
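
A minimal sketch of how an R value plays out at read time, with made-up helper names (this is not the riak_kv implementation): the coordinator asks all N replicas in parallel but returns as soon as R of them answer.

    -module(quorum_sketch).
    -export([quorum_read/3]).

    %% Ask every vnode on the preference list in parallel, and
    %% succeed as soon as R replies arrive.
    quorum_read(Key, Preflist, R) ->
        Parent = self(),
        [spawn(fun() -> Parent ! {reply, read_replica(V, Key)} end)
         || V <- Preflist],
        await_replies(R, []).

    await_replies(0, Acc) -> {ok, Acc};          % R replies gathered: success
    await_replies(R, Acc) ->
        receive
            {reply, Val} -> await_replies(R - 1, [Val | Acc])
        after 5000 ->
            {error, insufficient_replies}        % quorum not met in time
        end.

    %% Placeholder; a real system sends the vnode a command instead.
    read_replica(_VNode, Key) -> {Key, <<"stub_value">>}.
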
  • 5. Harvesting A Framework
      • We noticed that Riak code fell into one of two categories:
          • Code specific to K/V storage
          • “Generic” distributed systems code
      • So we split Riak into K/V and Core
  • 6. Distributed Coordination
      • Making many machines act like one
      • Division of labor
      • Load balancing
      • State storage
      • Mutual exclusion/locking
  • 7. Riak Core Applications [stack diagram: Your App and Riak K/V running on top of Riak Core]
  • 8. Riak Core Applications [stack diagram: as before, with a second Your App alongside Riak K/V on Riak Core]
  • 9. Riak Core Abstractions
      • Virtual Nodes
      • Preference Lists
      • Ring Event Watchers
      • Node Event Watchers
  • 10. Virtual Nodes
      • Primary actor in a Dynamo-based system
      • Handles 1/num_partitions of the load
      • Implements commands dispatched from clients
      • Handles handoff when nodes join/leave
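
For flavor, an abridged vnode skeleton. The behaviour and callback names follow later open-source riak_core releases and may not match the 2010 API exactly; the handoff callbacks the behaviour also requires are elided for brevity.

    -module(myapp_vnode).
    -behaviour(riak_core_vnode).
    -export([start_vnode/1, init/1, handle_command/3, terminate/2]).

    start_vnode(Index) ->
        riak_core_vnode_master:get_vnode_pid(Index, ?MODULE).

    %% One vnode per partition, each with private state; here, a map.
    init([Partition]) ->
        {ok, #{partition => Partition, store => #{}}}.

    %% Commands dispatched from clients land here.
    handle_command({put, K, V}, _Sender, State = #{store := S}) ->
        {reply, ok, State#{store := maps:put(K, V, S)}};
    handle_command({get, K}, _Sender, State = #{store := S}) ->
        {reply, maps:find(K, S), State}.

    terminate(_Reason, _State) -> ok.
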
  • 11. Preference Lists
      • Lists of virtual nodes obtained by hashing a request (document, session ID, etc.)
      • Allow any node to compute document locations
      • Central to replication in Riak
      • Down nodes are filtered out and replaced with the next-best nodes in the ring
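
In code, computing a preference list looks roughly like this. The riak_core_apl module postdates this talk, so take the names as an assumption about later riak_core releases; the service atom myapp is hypothetical.

    %% Hash the request onto the ring, then take the N vnodes that
    %% should handle it; unavailable primaries are replaced with
    %% fallbacks further around the ring.
    DocIdx = riak_core_util:chash_key({<<"users">>, <<"argv0">>}),
    Preflist = riak_core_apl:get_apl(DocIdx, 3, myapp),
    %% Preflist is now a list of {PartitionIndex, Node} pairs.
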
  • 12. Ring Event Watchers
      • Notified when ring state changes due to node addition/removal
      • API: ring_update(NewRing)
      • Can modify ring state in an app-specific fashion
  • 13. Node Event Watchers
      • Nodes run and advertise “services”
      • API: service_update(Services)
      • Active service list used to generate per-app preference lists
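
A sketch of wiring up both watcher callbacks named on the last two slides. The registration functions shown exist in the open-source riak_core tree, but treat the exact module names as an assumption rather than a pinned API; myapp is a placeholder service name.

    -module(myapp_watchers).
    -export([start/0, ring_update/1, service_update/1]).

    start() ->
        %% Advertise this application's service to the cluster.
        ok = riak_core_node_watcher:service_up(myapp, self()),
        %% Fires when ring ownership changes (nodes join/leave).
        riak_core_ring_events:add_sup_callback(fun ring_update/1),
        %% Fires when the cluster-wide service list changes.
        riak_core_node_watcher_events:add_sup_callback(fun service_update/1).

    ring_update(NewRing) ->
        io:format("ring changed: ~p partitions~n",
                  [riak_core_ring:num_partitions(NewRing)]).

    service_update(Services) ->
        io:format("services now up: ~p~n", [Services]).
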
  • 14. Use Cases
      • If distributed systems aren’t your core business, outsource them!
      • Providing a distribution layer on top of non-distributed systems like Couch, Redis, and Memcached
      • Implementing your own systems
  • 15. Current Status and Roadmap
      • Erlang-only now, but not for long (HTTP and PB APIs coming)
      • Some harvesting left to do (versioned objects, ring/node handler utilities)
      • Project templates: skeleton code for writing Riak Core-based systems
      • Stronger consistency models (with a Paxos/ZAB-like protocol)
  • 16. Thanks!
      • http://wiki.basho.com
      • http://github.com/basho
      • http://twitter.com/basho/team
      • irc://freenode.net/#riak
      • Riak SF Meetup (on meetup.com)
      • Visit us! 795 Folsom @ 4th (Twitter Bldg.)