Slides used in a course lecture on distributed hash tables

Slides used for a 45-minute course lecture (mixed UG/PG class) on distributed hash tables. [Try yourself] means that the students were given about 1–2 minutes to suggest solutions. The outline of the O(log N) lookup-hops proof was given on the blackboard and is therefore not included in the slides. Feel free to use with attribution. Please send your feedback to harisankarh at gmail.com

  1. Distributed Hash Tables (DHT)
     Harisankar H
     PhD student, DOS lab, Dept. of CSE, IIT Madras
     11/8/2012
     http://harisankarh.wordpress.com
  2. Motivation
     • BitTorrent
       – Given a file id, find the list of nodes currently associated with the file
         • File id -> node mapping
         • e.g., get("sfdsfdsf…") -> {203.12.123.45, 201.128.249.123, …}
     • Domain Name System (DNS)
       – Find the IP address of the server associated with a domain name
         • Domain name -> IP address mapping
         • e.g., get("google.co.in") -> {209.234.67.32}
  3. Abstract problem
     – Realize hash table functionality in a decentralized manner
       • Interface
         – put(key, value)
         – get(key) -> value
       • Realize using nodes which can join and leave at any time
     [try yourself!]
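(A minimal single-machine stand-in for the put/get interface above, not part of the original slides; the class and names are mine. The rest of the lecture is about providing this same interface when the table is spread over nodes that may join and leave at any time.)

    # Local stand-in for the put/get interface described on this slide.
    class KeyValueStore:
        def __init__(self):
            self._table = {}                  # local dict as the backing store

        def put(self, key, value):
            self._table[key] = value          # put(key, value)

        def get(self, key):
            return self._table.get(key)       # get(key) -> value (None if absent)

    store = KeyValueStore()
    store.put("file-id", ["203.12.123.45", "201.128.249.123"])
    print(store.get("file-id"))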
  4. Simple solutions
     • Flooding
       – put() -> store in any node
         • Cost: O(1)
       – get() -> send query to all nodes
         • Cost: O(N)
     • Full replication
       – put() -> store in all nodes
         • Cost: O(N)
       – get() -> check in any one node
         • Cost: O(1)
     [more solutions? try yourself!]
  5. Partitioning in a small setting
     • Assign different keys to different nodes
     • Need a key-to-node mapping
       – getnode(key) -> node id
     • How to distribute the keys?
       – Assume that every node knows when a node joins/leaves the system
       – Assume key range: 0 to 2^k – 1 (k-bit keys)
     [try yourself!]
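(One tempting answer to getnode(key), not given on the slide, is hash(key) mod N. The sketch below, with helper names of my choosing, shows why this behaves badly under churn: when the node count changes, almost every key changes owner, which motivates consistent hashing on the next slide.)

    import hashlib

    def stable_hash(key: str) -> int:
        # Deterministic integer hash (Python's built-in hash() is salted per process).
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def getnode_mod(key: str, num_nodes: int) -> int:
        # Naive partitioning: map a key to a node index by modulo.
        return stable_hash(key) % num_nodes

    keys = [f"key-{i}" for i in range(10_000)]
    before = {k: getnode_mod(k, 10) for k in keys}    # 10 nodes in the system
    after = {k: getnode_mod(k, 11) for k in keys}     # one node joins
    moved = sum(before[k] != after[k] for k in keys)
    print(f"{moved / len(keys):.0%} of keys changed owner")   # ~90% move here; consistent hashing moves only ~K/N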
  6. Consistent hashing
     [Figure: identifier ring from 0 to 2^k – 1 with nodes 0, 40, 70 and key 51]
     • Nodes are assigned ids in the same space (0 to 2^k – 1)
     • Each node is responsible for the key range between
       – its node id and the id of the previous node in the id space
     • Responsibilities are split accordingly when nodes join and leave
       – Responsibility of each node ≈ K/N
       – <k,v> pairs transferred during a node join/leave ≈ K/N
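(A minimal sketch of the assignment rule on this slide, assuming a 7-bit id space so the figure's example ids fit: a key is owned by the first node clockwise at or after its id, i.e. each node covers the range between its predecessor's id and its own. Function names are mine.)

    from bisect import bisect_left

    K = 7                       # k-bit id space: ids in 0 .. 2**K - 1
    RING = 2 ** K

    def owner(key_id, node_ids):
        """First node clockwise at or after key_id; wraps to the smallest id."""
        node_ids = sorted(node_ids)
        i = bisect_left(node_ids, key_id % RING)
        return node_ids[i % len(node_ids)]

    nodes = [0, 40, 70]                 # node ids as in the figure
    print(owner(51, nodes))             # -> 70: node 70 covers (40, 70]
    print(owner(90, nodes))             # -> 0:  the range (70, 0] wraps around the ring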
  7. Issues
     • In a large internet-scale setting
       – Millions of nodes
       – Low bandwidth
     • Costly to inform all the nodes when a node joins/leaves the system
       – O(N) messages
     • Problem
       – How to realize consistent hashing in a large internet-scale setting?
         • How to implement node join/leave, key put/get?
         • Assume that you know the IP address of one node which is already part of the system
     [try yourself!]
  8. Distributed Hash Tables (e.g., Chord)
     • Each node (id = n) maintains a list of the nodes responsible for the ids:
       (n + 2^i) mod 2^k, 0 <= i <= k-1
       – this list is the node's finger table
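(A sketch of the finger table defined by the formula above, computed with global knowledge purely for illustration; a real Chord node learns these entries through the join protocol. The 7-bit space and the three node ids from the earlier figure are assumed.)

    from bisect import bisect_left

    K = 7
    RING = 2 ** K

    def owner(ident, node_ids):
        # Consistent-hashing successor: first node clockwise at or after ident.
        node_ids = sorted(node_ids)
        i = bisect_left(node_ids, ident % RING)
        return node_ids[i % len(node_ids)]

    def finger_table(n, node_ids):
        # finger[i] = node responsible for (n + 2**i) mod 2**k, 0 <= i <= k-1
        return [owner((n + 2 ** i) % RING, node_ids) for i in range(K)]

    nodes = [0, 40, 70]
    print(finger_table(40, nodes))   # targets 41,42,44,48,56,72,104 -> [70, 70, 70, 70, 70, 0, 0]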
  9. Key lookup
     • Each key lookup query is forwarded to the node in the finger table that most closely precedes the key
       – forwarding stops when the key falls between the current node and its successor
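(Below is a sketch of this forwarding rule on the same toy ring: at each hop the query jumps to the finger that most closely precedes the key, stopping when the key falls between the current node and its successor. Finger tables are computed globally as in the previous sketch, so this illustrates routing only, not the full Chord protocol.)

    from bisect import bisect_left

    K = 7
    RING = 2 ** K

    def between(x, a, b):
        # True if x lies strictly inside the clockwise ring interval (a, b).
        x, a, b = x % RING, a % RING, b % RING
        return (a < x < b) if a < b else (x > a or x < b)

    def owner(ident, node_ids):
        node_ids = sorted(node_ids)
        i = bisect_left(node_ids, ident % RING)
        return node_ids[i % len(node_ids)]

    def finger_table(n, node_ids):
        return [owner((n + 2 ** i) % RING, node_ids) for i in range(K)]

    def lookup(start, key, node_ids):
        """Greedy finger-table routing; returns (responsible node, hop count)."""
        fingers = {n: finger_table(n, node_ids) for n in node_ids}
        n, hops = start, 0
        succ = owner(n + 1, node_ids)                      # n's successor on the ring
        while key % RING != succ and not between(key, n, succ):
            for f in reversed(fingers[n]):                 # highest finger first
                if f != n and between(f, n, key):          # f most closely precedes the key
                    n, hops = f, hops + 1
                    break
            succ = owner(n + 1, node_ids)
        return succ, hops

    print(lookup(0, 51, [0, 40, 70]))   # key 51: forwarded 0 -> 40, owner is 70 -> prints (70, 1)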
  10. Performance
     • Key lookup/put
       – O(log N) hops/messages
     • Node join/leave
       – O(log N) messages
         • Uses information from neighbours and periodic refreshing
     • O(log N) entries in the finger table [proof: try yourself!]
     • Scales to a large number of nodes in dynamic settings
       – Used in BitTorrent
     • Other types of DHTs
       – Pastry, Kademlia
  11. Amazon Dynamo
     • Key-value store inspired by DHTs
       – Used for the Amazon shopping cart
         • Cart id -> added items
     • Key features
       – 1-hop key lookup (O(N) neighbours per node)
         • Latency-sensitive application
       – Uses virtual nodes to handle heterogeneity and improve load dispersion
         • Virtual nodes were already proposed in Chord
       – Each data item replicated for availability
         • Versioning using vector clocks
       – Handles several implementation issues
     • Cassandra's architecture is inspired by Dynamo
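(A rough sketch of the virtual-node idea mentioned above; the token counts, hash choice and names are mine for illustration, not Dynamo's actual scheme. Each physical node is hashed to many points on the ring, and a more capable machine can be given more tokens so it absorbs a proportionally larger share of keys.)

    import hashlib
    from bisect import bisect_left
    from collections import Counter

    RING = 2 ** 32

    def h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16) % RING

    # Each physical node gets several virtual nodes (tokens) on the ring;
    # machine "C" is assumed to be twice as capable, so it gets twice the tokens.
    token_counts = {"A": 64, "B": 64, "C": 128}
    tokens = sorted((h(f"{node}#{i}"), node)
                    for node, count in token_counts.items()
                    for i in range(count))
    token_ids = [t for t, _ in tokens]

    def owner(key):
        # A key goes to the virtual node at or after its hash (consistent hashing).
        i = bisect_left(token_ids, h(key)) % len(tokens)
        return tokens[i][1]

    load = Counter(owner(f"cart-{i}") for i in range(30_000))
    print(load)   # C tends to serve roughly twice as many keys as A or B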
  12. Further research related to DHTs
     • Search using DHTs
     • Active key-value store
       – Incremental processing
       – Distributed processing
     • P2P computational grid
       – Vishwa: DHT used for coordinator assignment and storing task-related data
     • Node-capability-aware object placement
       – Virat
     • P2P file system
       – ENFS
     • …
  13. References
     1. Consistent hashing
        – Karger, D. et al. (1999). "Web Caching with Consistent Hashing". Computer Networks 31 (11): 1203–1213.
     2. Chord
        – Stoica, I. et al. "Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications". IEEE/ACM Transactions on Networking, Vol. 11, No. 1, February 2003.
     3. Dynamo
        – DeCandia, G. et al. "Dynamo: Amazon's Highly Available Key-value Store". SOSP '07.
     Image credits: DHT figures taken from the Chord paper [2]
