Lecture 7: DHT and P2P Games
Presentation Transcript

    • DHT-based P2P Architecture
    • No server to store game states.
    • A client can store its own player’s state, but what about the states of NPCs, objects, etc.?
    • Not scalable if we replicate the states of every object in the game in every client.
    • Idea: split the responsibility of storing the states among the clients.
    • Who stores what?
    • To find out who stores what, each client can maintain a table:

          Object   Client
          1        X
          2        Y
          3        Z
          :        :

    • Need to update the table at every node frequently. Still not scalable.
    • Idea: split the responsibility of storing the table among the clients.
    • DHT: Distributed Hash Table
    • A hash table supports: insert(key, object), delete(key), obj = lookup(key).
    • A distributed hash table supports the same operations: insert(key, object), delete(key), obj = lookup(key).
    • DHT: objects can be stored in any node in the network. (A sketch of the interface follows.)
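As a point of reference, the non-distributed interface is just a dictionary wrapper. The sketch below (class and field names are illustrative, not from the lecture) shows the three operations; a DHT must then provide the same three operations spread across many nodes.

```python
# Minimal single-node sketch of the hash table interface above.
# A DHT exposes the same three operations, but spreads the
# (key, object) pairs over the nodes in the network.
class HashTable:
    def __init__(self):
        self._store = {}

    def insert(self, key, obj):
        self._store[key] = obj

    def delete(self, key):
        self._store.pop(key, None)

    def lookup(self, key):
        return self._store.get(key)
```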
    • Example: given a torrent file, find the list of peers seeding or downloading the file.
    • How do we do this in a fully distributed manner?
    • Idea: have a set of established rules to decide which key (object) is stored in which node.
    • Rule: assign IDs to nodes and objects. An object is stored in the node with the closest ID.
    • How do we assign IDs? Given an object, how do we find the closest node?
    • Pastry: A Distributed Hash Table
    • How do we assign IDs?
    • To assign an ID, we can hash something unique about the node or object (e.g., its IP address, URL, or name) into, say, a 128-bit string.
    • An ID is of the form d1 d2 d3 ... dm, with each digit di ∈ {0, 1, ..., n−1}.
    • Example IDs (n = 10, m = 4): 0514, 2736, 4090
    • Example IDs (n = 3, m = 4): 1210, 1102, 2011
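One plausible way to produce such IDs (a sketch; Pastry’s exact encoding may differ) is to hash the name and re-encode the digest as m base-n digits:

```python
import hashlib

def assign_id(name, n=4, m=64):
    """Hash a string (IP address, URL, name, ...) into an ID of m
    digits, each in {0, 1, ..., n-1}.  With n = 4 and m = 64 the ID
    carries 128 bits, as in the lecture; illustrative only."""
    digest = int.from_bytes(hashlib.md5(name.encode()).digest(), "big")
    digits = []
    for _ in range(m):
        digits.append(digest % n)   # take the next base-n digit
        digest //= n
    return "".join(str(d) for d in reversed(digits))

print(assign_id("137.12.1.0", n=3, m=4))   # e.g. a 4-digit base-3 ID
```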
    • [Figure: the ID space drawn as a ring running from 0000 to 2222.] Any object whose ID falls within the blue region is stored in node A.
    • Given an object ID, how do we find the closest node?
    • Each node only knows a small, constant number of other nodes, kept in a routing table.
    • Routing table for node 1201:

          0121   137.12.1.0
          2001   22.31.90.9
          1021   45.24.8.233
          1121   :
          1210   :
          1222   :
          1200   :
          -      -
    • A node knows m × (n−1) neighbors: m groups, each group with n−1 entries.
    • Each node i keeps a table next(k, d) = address of node j such that: 1. i and j share a prefix of length k; 2. the (k+1)-th digit of j is d; 3. node j is the “physically closest” such match.
    • The same table, labeled with its (k, d) indices:

          next(0,0)   0121   137.12.1.0
          next(0,2)   2001   22.31.90.9
          next(1,0)   1021   45.24.8.233
          next(1,1)   1121   :
          next(2,1)   1210   :
          next(2,2)   1222   :
          next(3,0)   1200   :
          next(3,2)   -      -
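In code, the routing table is essentially an m × n array indexed by (k, d). A minimal sketch, ignoring Pastry’s proximity metric (which would be used to pick among candidates):

```python
def shared_prefix_len(a, b):
    """Length of the common digit prefix of two IDs."""
    k = 0
    while k < len(a) and a[k] == b[k]:
        k += 1
    return k

def build_routing_table(my_id, known_nodes, n=3):
    """table[k][d] = (id, address) of a node sharing a length-k prefix
    with my_id and whose (k+1)-th digit is d.  Pastry would prefer the
    physically closest candidate; this sketch keeps the first seen."""
    m = len(my_id)
    table = [[None] * n for _ in range(m)]
    for node_id, addr in known_nodes.items():
        k = shared_prefix_len(my_id, node_id)
        if k < m:                        # skip node_id == my_id
            d = int(node_id[k])
            if table[k][d] is None:
                table[k][d] = (node_id, addr)
    return table
```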
    • In addition, each node knows the L other nodes with the closest IDs (L/2 above, L/2 below).
    • Leaf Sets
    • Example routing table. [Figure: node 1201 with links to 2101, 0121, 1021, and 1121.]
    • Recall that we want to find the node with ID closest to the ID of a given object: node = route(object_id).
    • route(0212) is issued at node 1211, which forwards the request to next(0,0) = 0100.
    • route(0212) is received at 0100, which forwards the request to next(1,2) = 0201.
    • route(0212) is received at 0201, which forwards the request to next(2,1) = 0210.
    • 0210 finds that 0212 falls within the range of its leaf set and forwards the request to the closest node, 0211.
    • After 4 hops, we have found that the node closest to 0212 is 0211. (A sketch of one forwarding step follows.)
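Putting the routing table and leaf set together, one forwarding step of route() looks roughly like the sketch below (simplified: real Pastry also falls back to any numerically closer node when the required table entry is empty; `table` is as built in the earlier sketch):

```python
def route_step(my_id, table, leaf_set, target_id, n=3):
    """Return the (id, address) of the next hop for target_id, or
    None if this node is already the closest.  leaf_set is a list of
    (id, address) pairs; IDs are base-n digit strings."""
    val = lambda s: int(s, n)            # numeric value of an ID
    ids = [nid for nid, _ in leaf_set] + [my_id]
    # 1. Target inside the leaf-set range: deliver to the
    #    numerically closest node (possibly ourselves).
    if min(map(val, ids)) <= val(target_id) <= max(map(val, ids)):
        best = min(leaf_set + [(my_id, None)],
                   key=lambda e: abs(val(e[0]) - val(target_id)))
        return None if best[0] == my_id else best
    # 2. Otherwise forward to next(k, d), which shares a strictly
    #    longer prefix with the target than we do.
    k = 0
    while k < len(my_id) and my_id[k] == target_id[k]:
        k += 1
    return table[k][int(target_id[k])]
```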
    • Results of route() can be cached to avoid frequent lookups.
    • We can now implement the following using route(): insert(key, object), delete(key), obj = lookup(key). (See the sketch below.)
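With route() in place, the three DHT operations become thin wrappers. This is a sketch assuming route(id) returns the node object whose ID is closest to id and that each node has a local `store` dictionary; both are assumptions, not given in the slides.

```python
# Sketch: DHT operations on top of route() and assign_id() from the
# earlier sketches.  In practice each call travels over the network,
# and results of route() could be cached as noted above.
def insert(key, obj):
    route(assign_id(key)).store[key] = obj

def delete(key):
    route(assign_id(key)).store.pop(key, None)

def lookup(key):
    return route(assign_id(key)).store.get(key)
```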
    • Joining Procedure
    • Suppose node 0212 wants to join. It finds an existing node (e.g., 1211) to run route(0212).
    • Routing table entries are copied from the nodes encountered along the way.
    • Leaf sets are initialized from the leaf set of the node found by route(0212). (A sketch of the whole procedure follows.)
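A sketch of the join procedure, reusing shared_prefix_len from the routing-table sketch; route_path(), routing_table, and leaf_set are hypothetical attributes of a node, not names from the lecture.

```python
def join(new_id, bootstrap):
    """A new node joins by routing toward its own ID via a known
    node, copying routing state from the nodes visited on the way."""
    path = bootstrap.route_path(new_id)   # hypothetical: nodes visited by route(new_id)
    table = {}                            # (row, digit) -> (id, address)
    for hop in path:
        # Rows 0..k of hop's table are valid for us, where k is the
        # length of the prefix that hop shares with new_id.
        k = shared_prefix_len(hop.id, new_id)
        for row in range(k + 1):
            for d, entry in enumerate(hop.routing_table[row]):
                if entry is not None:
                    table.setdefault((row, d), entry)
    # The leaf set comes from the node with ID closest to new_id.
    leaf_set = list(path[-1].leaf_set)
    return table, leaf_set
```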
    • Leaf Sets
    • Scribe: Application-Level Multicast over Pastry
    • Recall: we use multicast to implement interest management.
    • In application-level multicast, nodes forward messages to each other.
    • Who should forward to whom?
    • Idea: use Pastry’s routing tables to construct the tree.
    • A group is assigned an ID, e.g., 0211.
    • The node with ID closest to the group ID becomes the rendezvous point.
    • Node 1200 joins the group by routing a join message toward the group ID.
    • The message stops once it reaches a node already on the multicast tree for group 0211.
    • The path traversed by the join message becomes part of the multicast tree.
    • To multicast a message, send it to the rendezvous point; from there it is disseminated along the tree. (A sketch follows.)
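A sketch of Scribe’s join and dissemination logic. It is simplified: tree repair on node failure and the member/forwarder distinction are omitted, and is_rendezvous(), next_hop_toward(), and deliver() are hypothetical helpers standing in for Pastry routing and application delivery.

```python
class ScribeNode:
    def __init__(self, node_id):
        self.id = node_id
        self.children = {}                  # group_id -> set of ScribeNode

    def handle_join(self, group_id, child=None):
        """Forward a join toward the group ID; stop at the first node
        already on the tree for this group (or at the rendezvous)."""
        on_tree = group_id in self.children
        kids = self.children.setdefault(group_id, set())
        if child is not None:
            kids.add(child)                 # the join path becomes tree edges
        if not on_tree and not self.is_rendezvous(group_id):
            self.next_hop_toward(group_id).handle_join(group_id, self)

    def multicast(self, group_id, msg):
        """Invoked at the rendezvous point; every node forwards the
        message down its branch of the tree."""
        self.deliver(msg)                   # hand to the application
        for child in self.children.get(group_id, ()):
            child.multicast(group_id, msg)
```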
    • Who stores what?
    • Knutsson’s idea: divide the game world into regions and assign a coordinator to keep the states of each region (e.g., mana=9, life=3; mana=5, life=1; ...).
    • When a player needs to read or write the state of an object, it contacts the coordinator (e.g., “player X’s mana = 10”).
    • Hash regions and nodes into the same ID space. The node whose ID is closest to the ID of a region becomes that region’s coordinator.
    • The coordinator is unlikely to be from the same region it is coordinating, which reduces the possibility of cheating.
    • Once an update message reaches the coordinator, the coordinator informs all subscribers to the region through a multicast tree, using Scribe. (See the sketch below.)
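In code, reaching a region’s coordinator is a DHT route, and fan-out to subscribers is a Scribe multicast. A sketch with illustrative names (`states` and `multicast` are assumptions, not from Knutsson’s paper), built on the earlier route() and assign_id() sketches:

```python
def coordinator_for(region_name):
    """The node whose ID is closest to hash(region) keeps the
    states of that region."""
    return route(assign_id(region_name))

def update_object(region_name, obj_id, new_state):
    coord = coordinator_for(region_name)
    coord.states[obj_id] = new_state
    # The coordinator then pushes the update to every subscriber of
    # the region's Scribe group:
    coord.multicast(assign_id(region_name), (obj_id, new_state))
```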
    • What if the coordinator fails?
    • Route to region 1111. [Figure: the message travels through nodes 2101, 1201, and 1121 to reach the coordinator.]
    • If the coordinator fails, Pastry routes the messages to the next closest node.
    • Use the next closest node to the region as the backup coordinator.
    • The primary coordinator knows the backup from its leaf set and replicates the states to it.
    • If the backup receives messages for a region, it knows that the primary has failed, and it takes over the responsibility. (A sketch follows.)
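The takeover rule can be sketched as below. Method names are illustrative, and the primary is assumed to have replicated the region’s states to the backup whenever they changed.

```python
def on_region_message(node, region_id, msg):
    """Pastry delivers a region's messages to the live node whose ID
    is closest to region_id.  A backup receiving such a message can
    therefore conclude that the primary has failed."""
    if node.is_backup_for(region_id):
        node.take_over(region_id)       # promote the replicated states
    node.apply_update(region_id, msg)   # act as (new) coordinator
```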
    • Issues with Knutsson’s scheme:
    • 1. No defense against cheating.
    • 2. Large latency when (i) looking up objects in a region, (ii) creating new objects, and (iii) updating the states of objects.
    • 3. Extra load on the coordinators.
    • 4. Frequent changes of coordinator for fast-moving players.
    • Knutsson’s design is for MMORPGs (slow-paced, latency-tolerant).
    • Can a similar architecture be used for FPS games?
    • Can we reduce the latency?
    • 1. Caching, to prevent frequent lookups.
    • 2. Prefetching objects that are near the AoI, to reduce delay.
    • 3. Don’t use Scribe; use direct connections, as in VON.
    • Recap
    • Without a trusted central server: 1. how to order events? 2. how to prevent cheating? 3. how to do interest management? 4. who should store the states?
    • Many interesting proposals, but no perfect solution: 1. increased message overhead, 2. increased latency, 3. no conflict resolution, 4. cheating, 5. robustness is hard.
    • Many tricks we learned from the pure P2P architecture are useful if we have a cluster of servers for games: “P2P among servers”.
    • Part III of CS4344: Hybrid Architecture.