Riak Tutorial (Øredev)

Detailed discussion and exercises for understanding Riak, from Øredev 2010

  • Think of it like a big hash table: you put and get values by key (see the put/get sketch below)
  • X = throughput, compute power for MapReduce, storage, lower latency
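  • A minimal put/get sketch of that hash-table view, assuming a local Riak node with the HTTP interface on the default 127.0.0.1:8098 and the /riak URL prefix; the bucket, key, and value are made up for illustration:

        import json
        import urllib.request

        BASE = "http://127.0.0.1:8098/riak"   # assumed default HTTP endpoint

        def put(bucket, key, value):
            # Store a JSON value under bucket/key, just like writing to a hash table.
            req = urllib.request.Request(
                f"{BASE}/{bucket}/{key}",
                data=json.dumps(value).encode(),
                headers={"Content-Type": "application/json"},
                method="PUT",
            )
            urllib.request.urlopen(req)

        def get(bucket, key):
            # Read it back by the same key.
            with urllib.request.urlopen(f"{BASE}/{bucket}/{key}") as resp:
                return json.loads(resp.read())

        put("artists", "REM", {"name": "R.E.M."})
        print(get("artists", "REM"))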

  • Consistent hashing means:
    1) a large, fixed-size key space
    2) no rehashing of keys: a key always hashes the same way (see the ring sketch below)
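  • A rough sketch of those two properties, loosely following Riak's scheme (SHA-1 of bucket/key onto a fixed 2^160 ring divided into equal partitions); the partition count, the string form of the key, and N=3 are illustrative, not the actual Riak code:

        import hashlib

        RING_SIZE = 2 ** 160              # 1) large, fixed-size key space (SHA-1 output range)
        NUM_PARTITIONS = 64               # the ring is split into equal, fixed partitions
        PARTITION_SIZE = RING_SIZE // NUM_PARTITIONS

        def key_hash(bucket, key):
            # 2) the same bucket/key always hashes to the same point; nothing is ever
            # rehashed when nodes join or leave, only partition ownership moves.
            digest = hashlib.sha1(f"{bucket}/{key}".encode()).digest()
            return int.from_bytes(digest, "big")

        def preference_list(bucket, key, n=3):
            # The object is replicated to N consecutive partitions starting from the
            # one that covers its hash on the ring.
            first = key_hash(bucket, key) // PARTITION_SIZE
            return [(first + i) % NUM_PARTITIONS for i in range(n)]

        print(preference_list("artists", "REM"))   # e.g. three partition indexes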

  • 1) Client requests a key
    2) Get handler starts up to service the request
    3) Hashes key to its owner partitions (N=3)
    4) Sends similar “get” request to those partitions
    5) Waits for R replies that concur (R=2)
    6) Resolves the object, replies to client
    7) The third reply may come back at any time, but the FSM replies as soon as the quorum is satisfied or violated (see the sketch below)
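  • A toy sketch of that read-path flow, assuming N=3 and R=2; the vnodes are faked in-process, and a real get FSM resolves conflicting replies with vector clocks rather than this simple "most common reply" check:

        from collections import Counter

        N, R = 3, 2   # replicas per key, read quorum

        class Vnode:
            # Toy stand-in for one partition's local key/value store.
            def __init__(self):
                self.data = {}
            def get(self, key):
                return self.data.get(key)

        def get_fsm(preflist, key, r=R):
            # Steps 3-5: ask every owner partition, reply as soon as R answers concur.
            replies = []
            for vnode in preflist:                 # in Riak these requests go out concurrently
                replies.append(vnode.get(key))
                value, concurring = Counter(replies).most_common(1)[0]
                if concurring >= r:
                    return value                   # step 6: resolve and reply; stragglers are ignored
            return None                            # quorum violated: not enough concurring replies

        a, b, c = Vnode(), Vnode(), Vnode()
        a.data["REM"] = b.data["REM"] = "v2"       # two up-to-date replicas
        c.data["REM"] = "v1"                       # one stale replica
        print(get_fsm([c, a, b], "REM"))           # -> "v2" once two replies concur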

  • *** Make sure to talk about LWW (last write wins) and commit hooks; tell them to ignore the vclock business ***
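  • Both of those are bucket properties. A hedged sketch of setting them over HTTP against the default local endpoint; last_write_wins is the LWW switch, and the pre-commit hook name here is purely hypothetical, so check the exact hook format against the Riak docs for your version:

        import json
        import urllib.request

        props = {"props": {
            "last_write_wins": True,                 # LWW: newest write wins, skip sibling resolution
            "precommit": [{"name": "validateDoc"}],  # hypothetical JavaScript pre-commit hook
        }}

        req = urllib.request.Request(
            "http://127.0.0.1:8098/riak/artists",    # bucket properties live at the bucket URL
            data=json.dumps(props).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req)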

  • “Quorums”? When I say “quora” I mean the constraints (or lack thereof) your application puts on request consistency.
  • Remember that requests contact all participant partitions/vnodes. No computer system is 100% reliable, so there will be times when increased latency or hardware failure makes a node unavailable. By unavailable, I mean requests time out, the network partitions, or there’s an actual physical outage.
    FT = fault-tolerance, C = consistency
    Strong consistency (as opposed to strict) means that the participants in each read quorum overlap those of each write quorum, which is guaranteed whenever R + W > N. The typical example is N=3, R=2, W=2: in every successful read, at least one of the read partitions is one that accepted the latest write (see the check below).
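  • A tiny check of that overlap claim, assuming uniform N/R/W settings: every possible read quorum shares at least one replica with every possible write quorum exactly when R + W > N:

        from itertools import combinations

        def quorums_overlap(n, r, w):
            # Enumerate every possible read quorum and write quorum over the N
            # replicas and confirm each pair shares at least one replica.
            replicas = range(n)
            return all(set(read) & set(write)
                       for read in combinations(replicas, r)
                       for write in combinations(replicas, w))

        print(quorums_overlap(3, 2, 2))   # True:  R + W > N, so reads always see the latest write
        print(quorums_overlap(3, 1, 1))   # False: a read can miss the one replica that took the write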
  • However, writes are a little more complicated to track than reads.
    When there’s a detectable node outage or partition, writes are sent to fallback vnodes (hinted handoff), which makes Riak highly write-available.
    Also, there’s an implied R quorum on every update, because the internal Erlang client has to fetch the object (and its vclock) before it can write the new value (see the read-modify-write sketch below).
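  • A sketch of that read-modify-write cycle over HTTP against the default local endpoint; the r/w/dw query parameters and the X-Riak-Vclock header follow Riak's HTTP API, but the bucket, key, and counter field are made up:

        import json
        import urllib.request

        url = "http://127.0.0.1:8098/riak/artists/REM"

        # The implied read quorum: fetch the current object and its vclock first.
        with urllib.request.urlopen(url + "?r=2") as resp:
            current = json.loads(resp.read())
            vclock = resp.headers["X-Riak-Vclock"]

        current["plays"] = current.get("plays", 0) + 1

        # Write it back with the vclock; w/dw say how many vnodes must acknowledge.
        req = urllib.request.Request(
            url + "?w=2&dw=1",
            data=json.dumps(current).encode(),
            headers={"Content-Type": "application/json", "X-Riak-Vclock": vclock},
            method="PUT",
        )
        urllib.request.urlopen(req)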
  • Why don’t we outright reclaim the space? Ordering is hard to determine, since deletes require no vclock, and we prefer not to lose data when there is an issue of contention.

  • This is probably one of the easiest Map-Reduce queries/jobs you can submit. It simply returns the values of all the keys in the bucket, including their bucket/key/vclock and metadata.
  • Instead of specifying the function inline, you can also store it under a bucket/key, and have Riak retrieve and execute it automatically.
  • A query that makes use of the “arg” in the map phase, named functions, and a reduce phase (a sketch of such a job follows below).

    Finally, here’s how you can submit all of these queries. Use @- to signify that your data will come on the next line and be terminated by Ctrl-D.
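  • A sketch of such a job posted to the /mapred endpoint from Python rather than curl; the bucket, the threshold passed as "arg", and the use of the bundled Riak.reduceSum named function are illustrative, so verify the phase format against your Riak version:

        import json
        import urllib.request

        job = {
            "inputs": "artists",
            "query": [
                # Map phase: inline JavaScript that uses "arg" as a threshold.
                {"map": {"language": "javascript",
                         "arg": 10,
                         "source": "function(v, keyData, arg) {"
                                   "  var doc = JSON.parse(v.values[0].data);"
                                   "  return doc.plays > arg ? [doc.plays] : [];"
                                   "}"}},
                # Reduce phase: a named built-in from Riak's JavaScript library.
                {"reduce": {"language": "javascript", "name": "Riak.reduceSum"}},
            ],
        }

        req = urllib.request.Request(
            "http://127.0.0.1:8098/mapred",
            data=json.dumps(job).encode(),
            headers={"Content-Type": "application/json"},
        )
        print(urllib.request.urlopen(req).read().decode())

    The curl route the note describes is the same request: POST that JSON body to /mapred with -d @-, type the job on stdin, and finish with Ctrl-D.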