Riak a successful failure


Speaker notes
  • Thank you to Greg Burd for most of these slides. He was going to give the presentation, but did not feel well enough to be here tonight.
  • Gin had not removed that vnode’s data directory after sending it to Aston. We had confirmation: data was not being removed after transfers finished. This would eventually have eaten all space on all nodes and halted the cluster.
  • We already had a solution ready for 1.0.2 that would properly identify any orphaned vnodes, so why not simply use it? We tested it on our laptops, creating a close approximation of the customer’s environment.
  • At this point it was late at night, the cluster was servicing requests as always, and customers had no idea anything was wrong. We all went to bed and didn’t reconvene for 12 hours.
  • On Gin only, we reset things we’d changed to default values and then re-enabled handoffs.
  • Transcript of "Riak a successful failure"

1. In Production: Portrait of a Successful Failure
   • Sean Cribbs (@seancribbs) [email_address]
2. Riak is...
   • a scalable,
   • highly available,
   • networked
   • key/value store.
3. Riak Data Model
   • Riak stores values against keys
   • Encode your data how you like it
   • Keys are grouped into buckets
4. Basic Operations
   • GET /buckets/B/keys/K
   • PUT /buckets/B/keys/K
   • DELETE /buckets/B/keys/K
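   For illustration, the same three operations look like this through Riak’s
   Erlang client (riakc); this is a sketch, and the host, port, bucket, and
   key below are assumptions, not taken from the slides:

      %% Connect over protocol buffers (default PB port 8087 assumed).
      {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),

      %% PUT /buckets/drinks/keys/gin
      Obj = riakc_obj:new(<<"drinks">>, <<"gin">>, <<"a juniper spirit">>),
      ok = riakc_pb_socket:put(Pid, Obj),

      %% GET /buckets/drinks/keys/gin
      {ok, Fetched} = riakc_pb_socket:get(Pid, <<"drinks">>, <<"gin">>),
      <<"a juniper spirit">> = riakc_obj:get_value(Fetched),

      %% DELETE /buckets/drinks/keys/gin
      ok = riakc_pb_socket:delete(Pid, <<"drinks">>, <<"gin">>).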
5. Extras
   • MapReduce, Link-walking
   • Value Metadata
   • Secondary Indexes
   • Full-text Search
   • Configurable Storage Engines
   • Admin GUI
6. When things go wrong
   • A Real Customer Story
7. Situation
   • You have a cluster
   • Things are great
   • It’s time to add capacity
8. Solution
   • Add a new node
9. Hostnames
   • This customer named nodes after drinks: Aston, IPA, Highball, Gin, Framboise, ESB
10. riak-admin join
   • With Riak, it’s easy to add a new node.
   • on aston: $ riak-admin join [email_address]
   • Then you leave for a quick lunch.
11. This can’t be good...
12. Quick, what do you do?
   • add another system!
   • shut down the entire site!
   • alert Basho Support via an URGENT ticket
13. Control the situation
   • Stop the handoff between nodes
   • on every node:
     riak attach
     application:set_env(riak_core, handoff_concurrency, 0).
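   handoff_concurrency caps how many vnode transfers a node will run at
   once, so forcing it to 0 stops any new handoff from starting. A quick
   sanity check that the override took (standard OTP application env API,
   not shown on the slide):

      application:get_env(riak_core, handoff_concurrency).
      %% => {ok, 0}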
14. Monitor
15. ...for signs of...
16. Stabilization
17. Now what?
   • What happened?
   • Why did it happen?
   • Can we fix this situation?
18. But first
   • Are you still operational?
     yes
   • Any noticeable changes in service latency?
     no
   • Have any nodes failed?
     no, the cluster is still servicing requests.
19. So what happened?!
   • New node added
   • Ring must rebalance
   • Nodes claim partitions
   • Handoff of data begins
   • Disks fill up
20. Member Status

   First let’s peek under the hood. (Ring is the share of partitions each
   node owns now; Pending is its target share once the rebalance finishes.)

   $ riak-admin member_status
   ================================= Membership ==================================
   Status     Ring     Pending    Node
   -------------------------------------------------------------------------------
   valid      4.3%     16.8%      riak@aston
   valid      18.8%    16.8%      riak@esb
   valid      19.1%    16.8%      riak@framboise
   valid      19.5%    16.8%      riak@gin
   valid      19.1%    16.4%      riak@highball
   valid      19.1%    16.4%      riak@ipa
   -------------------------------------------------------------------------------
   Valid:6 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
21. Relief
   • Let’s try to relieve the pressure a bit
   • Focus on the node with the least disk space left.
   • gin:~$ riak attach
     application:set_env(riak_core, forced_ownership_handoff, 0).
     application:set_env(riak_core, vnode_inactivity_timeout, 300000).
     application:set_env(riak_core, handoff_concurrency, 1).
     riak_core_vnode:trigger_handoff(element(2, riak_core_vnode_master:get_vnode_pid(411047335499316445744786359201454599278231027712, riak_kv_vnode))).
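   The one-liner at the end is easier to read unpacked; these are the same
   calls as on the slide, and the comments are our gloss:

      %% forced_ownership_handoff = 0: stop the ring from forcing ownership transfers
      %% vnode_inactivity_timeout = 300000: a vnode must sit idle 5 minutes before handing off
      %% handoff_concurrency = 1: at most one transfer at a time
      Partition = 411047335499316445744786359201454599278231027712,
      {ok, VnodePid} = riak_core_vnode_master:get_vnode_pid(Partition, riak_kv_vnode),
      riak_core_vnode:trigger_handoff(VnodePid).  %% hand off just this vnode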
22. Relief
   • It took 20 minutes to transfer the vnode (3,805,730 objects in 1256.14
     seconds, roughly 3,000 objects per second):

   (riak@gin)7> 19:34:00.574 [info] Starting handoff of partition riak_kv_vnode
   411047335499316445744786359201454599278231027712 from riak@gin to riak@aston

   gin:~$ sudo netstat -nap | fgrep 10.36.18.245
   tcp  0  1065  10.36.110.79:40532  10.36.18.245:8099   ESTABLISHED  27124/beam.smp
   tcp  0     0  10.36.110.79:46345  10.36.18.245:53664  ESTABLISHED  27124/beam.smp

   (riak@gin)7> 19:54:56.721 [info] Handoff of partition riak_kv_vnode
   411047335499316445744786359201454599278231027712 from riak@gin to riak@aston
   completed: sent 3805730 objects in 1256.14 seconds
23. Relief
   • And the vnode had arrived at Aston from Gin:

   aston:/data/riak/bitcask/205523667749658222872393179600727299639115513856-132148847970820$ ls -la
   total 7305344
   drwxr-xr-x   2 riak riak       4096 2011-11-11 18:05 .
   drwxr-xr-x 258 riak riak      36864 2011-11-11 18:56 ..
   -rw-------   1 riak riak 2147479761 2011-11-11 17:53 1321055508.bitcask.data
   -rw-r--r--   1 riak riak   86614226 2011-11-11 17:53 1321055508.bitcask.hint
   -rw-------   1 riak riak 1120382399 2011-11-11 19:50 1321055611.bitcask.data
   -rw-r--r--   1 riak riak   55333675 2011-11-11 19:50 1321055611.bitcask.hint
   -rw-------   1 riak riak 2035568266 2011-11-11 18:03 1321056070.bitcask.data
   -rw-r--r--   1 riak riak   99390277 2011-11-11 18:03 1321056070.bitcask.hint
   -rw-------   1 riak riak 1879298219 2011-11-11 18:05 1321056214.bitcask.data
   -rw-r--r--   1 riak riak   56509595 2011-11-11 18:05 1321056214.bitcask.hint
   -rw-------   1 riak riak        119 2011-11-11 17:53 bitcask.write.lock
24. Eureka!
   • Data was not being cleaned up after handoff.
   • This would eventually eat all disk space!
25. What’s the solution?
   • We already had a bugfix for the next release (1.0.2) that detects the problem
   • We tested the bugfix locally before delivering it to the customer
26. Hot Patch

   We patched their live, production system while still under load.

   (on all nodes)
   riak attach
   l(riak_kv_bitcask_backend).
   m(riak_kv_bitcask_backend).
   Module riak_kv_bitcask_backend compiled: Date: November 12 2011, Time: 04.18
   Compiler options: [{outdir,"ebin"}, debug_info, warnings_as_errors,
                      {parse_transform,lager_transform}, {i,"include"}]
   Object file: /usr/lib/riak/lib/riak_kv-1.0.1/ebin/riak_kv_bitcask_backend.beam
   Exports:
   api_version/0    is_empty/1
   callback/3       key_counts/0
   delete/4         key_counts/1
   drop/1           module_info/0
   fold_buckets/4   module_info/1
   fold_keys/4      put/5
   fold_objects/4   start/2
   get/3            status/1
   ...
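   Here l/1 reloads the module’s .beam from the code path into the running
   VM, and m/1 prints its metadata so you can confirm the new build. One can
   also ask the code server directly (standard OTP, not shown on the slide):

      %% Which object file is actually loaded for this module?
      code:which(riak_kv_bitcask_backend).
      %% => "/usr/lib/riak/lib/riak_kv-1.0.1/ebin/riak_kv_bitcask_backend.beam"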
27. Bingo!

   And the new code did what we expected.

   {ok, R} = riak_core_ring_manager:get_my_ring().
   [riak_core_vnode_master:get_vnode_pid(Partition, riak_kv_vnode)
    || {Partition, _} <- riak_core_ring:all_owners(R)].

   (riak@gin)19> [riak_core_vnode_master:get_vnode_pid(Partition, riak_kv_vnode)
                 || {Partition, _} <- riak_core_ring:all_owners(R)].
   22:48:07.423 [notice] Unused data directories exist for partition
   "11417981541647679048466287755595961091061972992":
   "/data/riak/bitcask/11417981541647679048466287755595961091061972992"
   22:48:07.785 [notice] Unused data directories exist for partition
   "582317058624031631471780675535394015644160622592":
   "/data/riak/bitcask/582317058624031631471780675535394015644160622592"
   22:48:07.829 [notice] Unused data directories exist for partition
   "782131735602866014819940711258323334737745149952":
   "/data/riak/bitcask/782131735602866014819940711258323334737745149952"
   [{ok,<0.30093.11>},...
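   Step by step, that comprehension does the following (same calls as above;
   the comments are our gloss):

      {ok, Ring} = riak_core_ring_manager:get_my_ring(),   %% this node's view of the ring
      Owners = riak_core_ring:all_owners(Ring),            %% [{Partition, OwnerNode}] pairs
      %% Touching every vnode makes the freshly patched backend code run as
      %% each one starts, logging any leftover data directories it finds:
      [riak_core_vnode_master:get_vnode_pid(P, riak_kv_vnode) || {P, _} <- Owners].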
28. Manual Cleanup

   So we backed up those vnodes with unused data on Gin to another system
   and manually removed them.

   gin:/data/riak/bitcask$ ls manual_cleanup/
   11417981541647679048466287755595961091061972992
   582317058624031631471780675535394015644160622592
   782131735602866014819940711258323334737745149952
   gin:/data/riak/bitcask$ rm -rf manual_cleanup
29. Gin’s Status Improves
30. Bedtime
   • It was late at night, things were stable and the customer’s users were unaffected.
   • We all went to bed, and didn’t reconvene for 12 hours.
31. Next Day’s Plan
   • Start up handoff on the node with the lowest disk space
     - let it move data one partition at a time to other nodes
     - observe that data directories are removed after successful transfers complete
   • When disk space frees up a bit, start up the other nodes, increase handoff concurrency, and watch the ring rebalance.
32. Let’s Get Started
   • On Gin only: reset to defaults, re-enable handoffs
   • on gin:
     application:unset_env(riak_core, forced_ownership_handoff).
     application:set_env(riak_core, vnode_inactivity_timeout, 60000).
     application:set_env(riak_core, handoff_concurrency, 1).
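   The same reset, annotated (the comments are our gloss):

      application:unset_env(riak_core, forced_ownership_handoff).      %% drop yesterday's override
      application:set_env(riak_core, vnode_inactivity_timeout, 60000). %% back to 60s idle before handoff
      application:set_env(riak_core, handoff_concurrency, 1).          %% still throttled to one transfer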
33. Gin Moves Data to IPA
34. Highball’s Turn

   Highball was next lowest now that Gin was handing data off; time to restart it too.

   on highball:
   application:unset_env(riak_core, forced_ownership_handoff).
   application:set_env(riak_core, vnode_inactivity_timeout, 60000).
   application:set_env(riak_core, handoff_concurrency, 1).

   on gin:
   application:set_env(riak_core, handoff_concurrency, 4). % the default setting
   riak_core_vnode_manager:force_handoffs().
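   Our gloss on the last two calls: with the throttle lifted back to the
   default of four concurrent transfers, force_handoffs/0 asks every local
   vnode to begin any pending transfer immediately instead of waiting out
   the inactivity timeout:

      application:set_env(riak_core, handoff_concurrency, 4). %% default concurrency restored
      riak_core_vnode_manager:force_handoffs().               %% kick off pending transfers now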
35. Rebalance Starts
36. and keeps going...
37. and going...
38. and going...
39. Rebalanced
40. Minimal Impact
   • 6ms variance at the 99th percentile (32ms to 38ms)
   • 0.68s variance at the 100th percentile (0.12s to 0.8s)
41. Moral of the Story
   • Riak’s resilience under stress resulted in minimal operational impact
   • Hot code-patching solved the problem in situ, without downtime
   • We all got some sleep!
42. Things break, Riak bends.
43. Thank You
   • http://basho.com/resources/downloads/
   • https://github.com/basho/riak/
   • [email_address]