Lecture 7: DHT and P2P Games


  1. DHT-based P2P Architecture
  2. No server to store game states
  3. A client can store its own player's state, but what about the states of NPCs, objects, etc.?
  4. Not scalable if we replicate the states of every object in the game in every client
  5. Idea: split the responsibility of storing the states among the clients
  6. Who stores what?
  7. To find out who stores what, each client can maintain a table: object 1 → client X, object 2 → client Y, object 3 → client Z, ...
  8. Need to update the table at every node frequently. Still not scalable.
  9. Idea: split the responsibility of storing the table among the clients
  10. DHT: Distributed Hash Table
  11. Hash table interface: insert(key, object), delete(key), obj = lookup(key)
  12. Distributed hash table: the same operations, insert(key, object), delete(key), obj = lookup(key)
  13. DHT: objects can be stored in any node in the network.
  14. Example: given a torrent file, find the list of peers seeding or downloading the file.
  15. How to do this in a fully distributed manner?
  16. Idea: have a set of established rules to decide which key (object) is stored in which node.
  17. Rule: assign IDs to nodes and objects. An object is stored in the node with the closest ID (see the closest-node sketch after the slide list).
  18. How to assign IDs? Given an object, how to find the closest node?
  19. Pastry: a distributed hash table
  20. How to assign IDs?
  21. To assign an ID, we can hash something that identifies the node or object (e.g., an IP address, URL, or name) into a fixed-length string, e.g., 128 bits (see the ID-hashing sketch after the slide list).
  22. An ID is of the form d1 d2 d3 ... dm, with each digit di ∈ {0, 1, 2, ..., n-1}
  23. Example IDs (n = 10, m = 4): 0514, 2736, 4090
  24. Example IDs (n = 3, m = 4): 1210, 1102, 2011
  25. [Figure: the ID space from 0000 to 2222]
  26. [Figure: the ID space from 0000 to 2222]
  27. Any object whose ID falls within the blue region is stored in node A. [Figure: the ID space with node A and its region marked]
  28. Given an object ID, how to find the closest node?
  29. Each node only knows a small, constant number of other nodes, kept in a routing table.
  30. Routing table for node 1201: 0121 (137.12.1.0), 2001 (22.31.90.9), 1021 (45.24.8.233), 1121, 1210, 1222, 1200, plus one empty entry. Each entry maps a node ID to its network address.
  31. A node knows m × (n-1) neighbors: m groups, each group with n-1 entries.
  32. Each node i keeps a table next(k, d) = address of a node j such that (1) i and j share a prefix of length k, (2) the (k+1)-th digit of j is d, and (3) node j is the "physically closest" such match (see the routing sketch after the slide list).
  33. The same table indexed as next(k, d): next(0,0) = 0121 (137.12.1.0), next(0,2) = 2001 (22.31.90.9), next(1,0) = 1021 (45.24.8.233), next(1,1) = 1121, next(2,1) = 1210, next(2,2) = 1222, next(3,0) = 1200, next(3,2) = (empty).
  34. In addition, each node knows the L other nodes with the closest IDs (L/2 above, L/2 below).
  35. Leaf Sets
  36. Example routing table [Figure: nodes 2101, 0121, 1201, 1021, and 1121 in the ID space]
  37. Recall that we want to find the node with the ID closest to the ID of a given object: node = route(object_id)
  38. route(0212) is issued at node 1211. 1211 forwards the request to next(0, 0) = 0100.
  39. route(0212) is received at 0100. 0100 forwards the request to next(1, 2) = 0201.
  40. route(0212) is received at 0201. 0201 forwards the request to next(2, 1) = 0210.
  41. 0210 finds that 0212 is within the range of its leaf set and forwards the request to the closest node, 0211.
  42. After 4 lookups, we have found that the node closest to 0212 is 0211.
  43. Results of route() can be cached to avoid frequent lookups.
  44. We can now implement the following using route(): insert(key, object), delete(key), obj = lookup(key) (see the DHT-operations sketch after the slide list).
  45. Joining Procedure
  46. Suppose node 0212 wants to join. It finds some existing node (e.g., 1211) and asks it to run route(0212).
  47. Routing table entries are copied from the nodes encountered along the way (see the joining sketch after the slide list).
  48. The leaf set is initialized from the leaf set of the node found by route(0212), i.e., 0211.
  49. Leaf Sets
  50. Scribe: Application-Level Multicast over Pastry
  51. Recall: we use multicast to implement interest management.
  52. In application-level multicast, nodes forward messages to each other.
  53. Who should forward to whom?
  54. Idea: use Pastry's routing table to construct the tree.
  55. A group is assigned an ID, e.g., 0212. [Figure: nodes 0210, 0200, 0221, 0110, 0001, and 2101 in the ID space]
  56. The node with the ID closest to the group ID becomes the rendezvous point.
  57. 1200 joins the group by routing a join message towards the group ID.
  58. The message stops once it reaches a node that is already on the multicast tree for the group.
  59. The path traversed by this join message becomes part of the multicast tree (see the Scribe sketch after the slide list).
  60. To multicast a message, send it to the rendezvous point; from there it is disseminated along the tree.
  61. Who stores what?
  62. Knutsson's idea: divide the game world into regions and assign a region coordinator to keep the states of each region (e.g., mana = 9, life = 3; mana = 5, life = 1; ...).
  63. When a player needs to read or write the state of an object, it contacts the coordinator (e.g., "player X's mana = 10").
  64. Hash regions and nodes into the same ID space. The node whose ID is closest to the ID of a region becomes that region's coordinator. [Figure: game map mapped into the DHT ID space]
  65. The coordinator is likely not to be in the same region it is coordinating, reducing the possibility of cheating.
  66. Once an update message reaches the coordinator, the coordinator informs all subscribers to the region through a multicast tree built using Scribe (see the region-update sketch after the slide list). [Figure: nodes interested in the region and the edges of the multicast tree]
  67. What if the coordinator fails?
  68. Route to region 1111 [Figure: the route passes through nodes 2101, 1201, 1121, 1110]
  69. If the coordinator fails, Pastry routes the messages to the next closest node instead. [Figure: the failed node is crossed out and 1112 now receives the messages]
  70. Use the next closest node to the region's ID as the backup coordinator. [Figure: game map, DHT ID space, and the backup node]
  71. The primary coordinator knows the backup from its leaf set and replicates the states to the backup coordinator.
  72. If the backup receives messages for a region, it knows that the primary has failed and takes over the responsibility (see the failover sketch after the slide list).
  73. Issues with Knutsson's scheme
  74. 1. No defense against cheating
  75. 2. Large latency when (i) looking up objects in a region, (ii) creating new objects, and (iii) updating the state of objects
  76. 3. Extra load on coordinators
  77. 4. Frequent changes of coordinators for fast-moving players
  78. Knutsson's design is for MMORPGs (slow pace, can tolerate latency).
  79. Can a similar architecture be used for FPS games?
  80. Can we reduce the latency?
  81. 1. Caching to prevent frequent lookups (see the caching sketch after the slide list)
  82. 2. Prefetching objects that are near the AoI to reduce delay
  83. 3. Don't use Scribe; use direct connections as in VON
  84. Recap
  85. Without a trusted central server: (1) how to order events? (2) how to prevent cheating? (3) how to do interest management? (4) who should store the states?
  86. Many interesting proposals, but no perfect solution: (1) increased message overhead, (2) increased latency, (3) no conflict resolution, (4) cheating, (5) robustness is hard.
  87. Many tricks we have learnt from pure P2P architectures are useful if we have a cluster of servers for games: "P2P among servers".
  88. Part III of CS4344: Hybrid Architecture
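
The sketches below expand on several of the slides; they are minimal Python illustrations under stated assumptions, not the implementations used in the lecture or in the Pastry/Knutsson papers. First, the closest-node sketch for slides 17 and 27-28: an object is stored at the node whose ID is numerically closest to the object's ID. Treating the ID space as circular and the helper name closest_node are assumptions for illustration.

```python
def closest_node(object_id: str, node_ids: list[str], n: int = 3) -> str:
    """Return the node whose ID is numerically closest to object_id,
    measuring distance around the circular ID space."""
    space = n ** len(object_id)          # total number of IDs
    key = int(object_id, n)              # interpret the digits in base n
    def ring_distance(node_id: str) -> int:
        d = abs(int(node_id, n) - key)
        return min(d, space - d)         # wrap around the ring
    return min(node_ids, key=ring_distance)

# With the IDs from the routing example (n = 3), 0211 is closest to 0212.
print(closest_node("0212", ["1211", "0100", "0201", "0210", "0211"]))
```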
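The ID-hashing sketch for slides 21-24 derives an m-digit, base-n ID by hashing an arbitrary name. The helper name make_id and the choice of SHA-1 are assumptions; any uniform hash would do.

```python
import hashlib

def make_id(name: str, n: int = 10, m: int = 4) -> str:
    """Hash a name (IP address, URL, player name, ...) into an m-digit ID
    whose digits are drawn from {0, 1, ..., n-1}."""
    h = int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")
    digits = []
    for _ in range(m):
        digits.append(str(h % n))        # take one base-n digit at a time
        h //= n
    return "".join(reversed(digits))

print(make_id("137.12.1.0"))             # a 4-digit decimal ID
print(make_id("example-player", n=3))    # digits only from {0, 1, 2}
```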
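The routing sketch for slides 32 and 37-42 shows one prefix-routing hop driven by the next(k, d) table, with the leaf set as a fallback. The names next_table and leaf_set are illustrative, and real Pastry checks the leaf-set range before consulting the routing table; this sketch keeps only the simplest behaviour.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common prefix of two IDs."""
    k = 0
    while k < len(a) and a[k] == b[k]:
        k += 1
    return k

def route_step(my_id: str, key: str, next_table: dict, leaf_set: set,
               n: int = 3) -> str:
    """Return the ID of the node to forward route(key) to; returning my_id
    means this node is (locally) the closest, so routing stops here."""
    k = shared_prefix_len(my_id, key)
    if k == len(key):
        return my_id                         # exact match: we hold the key
    hop = next_table.get((k, key[k]))        # next(k, d) from slide 32
    if hop is not None:
        return hop
    # No routing-table entry: fall back to the numerically closest ID we
    # know of (the leaf set of slide 34), which may be ourselves.
    return min(leaf_set | {my_id},
               key=lambda node: abs(int(node, n) - int(key, n)))

# Slide 38: node 1211 forwards route(0212) to next(0, 0) = 0100.
print(route_step("1211", "0212", {(0, "0"): "0100"}, {"1210", "1212"}))
```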
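The DHT-operations sketch for slide 44 expresses insert, delete, and lookup in terms of route(). The in-process stand-ins for route() and hash_key(), and the toy node IDs, are assumptions; in a real system each operation becomes a message sent to the node that route() identifies.

```python
import hashlib

class DHTNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.store = {}                      # key -> object held by this node

# A toy "network" of nodes; in reality these live on different machines.
NODES = [DHTNode(i) for i in (3, 17, 42, 90)]

def hash_key(key: str) -> int:
    # Map an application key into the (tiny) ID space 0..99.
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big") % 100

def route(object_id: int) -> DHTNode:
    # Stand-in for Pastry's route(): return the node with the closest ID.
    return min(NODES, key=lambda node: abs(node.node_id - object_id))

# Slide 44: the three DHT operations, each just one route() away.
def insert(key, obj): route(hash_key(key)).store[key] = obj
def delete(key):      route(hash_key(key)).store.pop(key, None)
def lookup(key):      return route(hash_key(key)).store.get(key)

insert("torrent:ubuntu.iso", ["peer1", "peer2"])
print(lookup("torrent:ubuntu.iso"))          # -> ['peer1', 'peer2']
```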
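The joining sketch for slides 46-48 shows a new node assembling its routing table from the nodes encountered along route(new_id) and taking its leaf set from the closest node found. The row-per-hop copying rule is a simplification of Pastry's actual join, and the Node record is illustrative.

```python
class Node:
    def __init__(self, node_id, next_table, leaf_set):
        self.node_id, self.next_table, self.leaf_set = node_id, next_table, leaf_set

def initialize_new_node(new_id: str, path: list, closest: Node) -> Node:
    table = {}
    for row, hop in enumerate(path):
        # The node met at hop `row` shares a prefix of (roughly) length `row`
        # with new_id, so its row `row` is what the new node needs.
        for (k, d), entry in hop.next_table.items():
            if k == row:
                table.setdefault((k, d), entry)
    # Slide 48: the leaf set comes from the node closest to new_id.
    return Node(new_id, table, set(closest.leaf_set))

a = Node("1211", {(0, "0"): "0100", (0, "2"): "2001"}, {"1210", "1212"})
b = Node("0100", {(1, "2"): "0201"}, {"0001", "0110"})
c = Node("0211", {}, {"0210", "0221"})
joined = initialize_new_node("0212", [a, b], c)
print(joined.next_table, joined.leaf_set)
```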
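The Scribe sketch for slides 55-60 grafts a joining node onto the multicast tree and then pushes a message down from the rendezvous point. The NEXT_HOP table stands in for Pastry's routing towards the group ID, and taking 0210 as the rendezvous point is an assumption based on the node IDs shown on the slides.

```python
# Toy topology: each node's next hop on the route towards the group ID,
# precomputed by hand in place of real Pastry routing.
NEXT_HOP = {"1200": "0200", "0200": "0210", "0221": "0210",
            "0110": "0210", "0001": "0110", "2101": "0200"}
RENDEZVOUS = "0210"
children = {}                # multicast tree: node -> set of child nodes

def scribe_join(new_node: str) -> None:
    """Forward a JOIN towards the rendezvous point; each hop adopts the
    previous hop as a child, and the message stops at the first node that
    is already on the tree (slides 57-59)."""
    child, current = new_node, NEXT_HOP[new_node]
    while True:
        already_on_tree = current in children or current == RENDEZVOUS
        children.setdefault(current, set()).add(child)
        if already_on_tree:
            return
        child, current = current, NEXT_HOP[current]

def scribe_multicast(message: str) -> None:
    """Slide 60: deliver at the rendezvous point, then push down the tree."""
    def push(node: str) -> None:
        print(f"{node} receives: {message}")   # stand-in for local delivery
        for child in children.get(node, ()):
            push(child)
    push(RENDEZVOUS)

scribe_join("1200")
scribe_join("0001")
scribe_multicast("player entered region")
```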
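The region-update sketch for slides 62-66 hashes a region into the node ID space, sends a write to the region's coordinator, and fans the update out to subscribers. The region naming scheme, the coordinator and subscriber tables, and the plain loop in place of Scribe are all illustrative assumptions.

```python
import hashlib

REGION_COORDINATOR = {}   # region id -> node id (the closest node in the DHT)
REGION_STATE = {}         # (region id, object key) -> state kept by coordinators
SUBSCRIBERS = {}          # region id -> nodes interested in that region

def region_id(rx: int, ry: int) -> str:
    """Hash a region's coordinates into the same ID space as the nodes
    (slide 64); 4 decimal digits here, purely for illustration."""
    digest = hashlib.sha1(f"region-{rx}-{ry}".encode()).digest()
    return f"{int.from_bytes(digest, 'big') % 10_000:04d}"

def update_object(rx: int, ry: int, obj_key: str, new_state: dict) -> None:
    rid = region_id(rx, ry)
    # Slide 63: the write goes to the region's coordinator, which a real
    # system would find with route(rid); here we just look it up.
    coordinator = REGION_COORDINATOR[rid]
    REGION_STATE[(rid, obj_key)] = new_state
    # Slide 66: the coordinator multicasts the change to all subscribers
    # of the region (Scribe in the paper; a plain loop here).
    for node in SUBSCRIBERS.get(rid, ()):
        print(f"{coordinator} -> {node}: {obj_key} = {new_state}")

rid = region_id(3, 7)
REGION_COORDINATOR[rid] = "0211"
SUBSCRIBERS[rid] = ["1211", "2101"]
update_object(3, 7, "playerX", {"mana": 10})
```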
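The failover sketch for slides 70-72 shows the primary replicating region state to its backup and the backup taking over when it starts receiving the region's traffic. The Coordinator record and the reuse of node IDs 1110 and 1112 from the earlier figures are assumptions.

```python
class Coordinator:
    """A node acting as primary for some regions and backup for others."""
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.primary_regions = {}   # region id -> state it is responsible for
        self.backup_regions = {}    # region id -> replicated state

def replicate(primary: Coordinator, backup: Coordinator, region: str) -> None:
    """Slide 71: the primary pushes a copy of the region state to the backup
    (the next closest node, known from its leaf set)."""
    backup.backup_regions[region] = dict(primary.primary_regions[region])

def on_region_message(node: Coordinator, region: str, update: dict) -> None:
    """Slide 72: if a backup starts receiving a region's messages, the
    primary must have failed, so the backup promotes itself."""
    if region not in node.primary_regions and region in node.backup_regions:
        node.primary_regions[region] = node.backup_regions.pop(region)
        print(f"{node.node_id} takes over region {region}")
    node.primary_regions.setdefault(region, {}).update(update)

primary, backup = Coordinator("1110"), Coordinator("1112")
primary.primary_regions["1111"] = {"playerX": {"mana": 9, "life": 3}}
replicate(primary, backup, "1111")
on_region_message(backup, "1111", {"playerX": {"mana": 10, "life": 3}})
```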
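The caching sketch for slides 43 and 81 memoises route() results so repeated lookups skip the multi-hop routing. The toy route() here just scans a list of node IDs; note that a cached result is only a hint, since the responsible node can change when nodes join or fail.

```python
from functools import lru_cache

NODE_IDS = [9, 19, 21, 22, 49]               # toy set of node IDs

def route(object_id: int) -> int:
    print(f"full DHT lookup for {object_id}")  # the expensive multi-hop part
    return min(NODE_IDS, key=lambda nid: abs(nid - object_id))

@lru_cache(maxsize=1024)
def cached_route(object_id: int) -> int:
    return route(object_id)                   # only reached on a cache miss

cached_route(23)    # performs the lookup
cached_route(23)    # served from the cache, no routing traffic
```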
