4. Peer-to-Peer Networks
No central server
Node-to-node connections
Resilient
High message volume
Applications: file sharing, distributed computing, instant messaging
7. Chord
Structured peer-to-peer architecture
Well known in the research field
Combines the advantages of Napster- and Gnutella-like architectures:
An index of resources (Napster)
distributed over multiple nodes (Gnutella)
Based on Distributed Hash Tables (DHTs)
Simple lookup mechanism: given a key, it returns the associated value(s)
8. Distributed Hash Tables
Hash functions map data keys to buckets
Keys are stored in buckets in an index table
Example with f(x) = x mod 6:

Bucket | Keys
0      | 12, 24, 30
1      | 19, 61
2      | 20, 26, 56
3      | 3, 9
4      | 10, 16, 28, 34
5      | 23, 35, 65

f(15) = 15 mod 6 = 3, so key 15 maps to bucket 3
f(36) = 36 mod 6 = 0, so key 36 maps to bucket 0
Distributed Hash Tables
Each node hosts part of the index (one or more buckets)
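The mod-6 example above can be sketched in a few lines of Python. The key set and bucket count come from the slide's table; the node-to-bucket assignment at the end is a hypothetical illustration of each node hosting part of the index.

```python
# Sketch of the slide's DHT example: f(x) = x mod 6 maps keys to buckets,
# and each node hosts one or more buckets of the index.
NUM_BUCKETS = 6

def bucket_for(key: int) -> int:
    """Hash function from the example: f(x) = x mod 6."""
    return key % NUM_BUCKETS

# Keys from the example table.
keys = [12, 24, 30, 19, 61, 20, 26, 56, 3, 9, 10, 16, 28, 34, 23, 35, 65]

# Build the index table: bucket -> list of keys.
index = {b: [] for b in range(NUM_BUCKETS)}
for k in keys:
    index[bucket_for(k)].append(k)

print(bucket_for(15))   # 3: 15 mod 6 = 3
print(bucket_for(36))   # 0: 36 mod 6 = 0
print(index[4])         # [10, 16, 28, 34]

# Distribute buckets over nodes (hypothetical assignment): each node
# hosts part of the index, as described on the slide.
nodes = {"node_A": [0, 1], "node_B": [2, 3], "node_C": [4, 5]}
```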
11. ROME Concept
Message cost is proportional to the number of nodes in the network (n)
Reduce n, reduce message cost
Goal: keep the ring “just big enough”
Must always support current workload
Not unnecessarily large
Workload should determine ring size, not the number of nodes in the network
Adding functionality to Chord
22. Replace Action
Search node pool to find a node with:
NPNode.LowerThreshold < OLNode.Workload AND NPNode.UpperThreshold > OLNode.Workload
Break ties by selecting the best quality node
Quality: percentage of heartbeats received from the node since initial registration
A measure of node reliability
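The selection rule above can be sketched as follows. The class and field names mirror the slide's notation but are assumptions, and `quality` stands for the heartbeat percentage used as the tie-breaker.

```python
from dataclasses import dataclass

@dataclass
class PoolNode:
    lower_threshold: float   # NPNode.LowerThreshold
    upper_threshold: float   # NPNode.UpperThreshold
    quality: float           # fraction of heartbeats received since registration

def select_replacement(pool, ol_workload):
    """Find pool nodes whose thresholds bracket the overloaded node's
    workload; break ties by selecting the highest-quality (most
    reliable) node."""
    candidates = [n for n in pool
                  if n.lower_threshold < ol_workload < n.upper_threshold]
    if not candidates:
        return None  # no suitable node: fall through to the Add action
    return max(candidates, key=lambda n: n.quality)

pool = [PoolNode(10, 120, 0.90), PoolNode(50, 300, 0.99), PoolNode(0, 80, 0.95)]
best = select_replacement(pool, ol_workload=100)
print(best.quality)  # 0.99: two nodes bracket the workload, highest quality wins
```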
27. Add Action
Used when no node is found to replace the overloaded node
Add a node to take on enough workload to restore the overloaded node to its normal workload
Select a node from the pool with:
NPNode.LowerThreshold < (OLNode.Workload - OLNode.Target) AND NPNode.UpperThreshold > (OLNode.Workload - OLNode.Target)
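A minimal sketch of this condition, keeping the slide's notation in the comments (the dictionary keys and example numbers are illustrative assumptions):

```python
def select_added_node(pool, ol_workload, ol_target):
    """Add action: find a pool node able to absorb the excess workload
    (OLNode.Workload - OLNode.Target), restoring the overloaded node
    to its target workload."""
    excess = ol_workload - ol_target
    for n in pool:
        # NPNode.LowerThreshold < excess AND NPNode.UpperThreshold > excess
        if n["lower_threshold"] < excess < n["upper_threshold"]:
            return n
    return None

pool = [{"name": "np1", "lower_threshold": 5, "upper_threshold": 40},
        {"name": "np2", "lower_threshold": 30, "upper_threshold": 200}]
# Overloaded node at 150 units with a target of 100: excess = 50.
chosen = select_added_node(pool, ol_workload=150, ol_target=100)
print(chosen["name"])  # np2: np1 fails (40 < 50), np2 passes (30 < 50 < 200)
```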
35. Remove Action
If a node is removed, its successor becomes responsible for its portion of the keyspace (and its workload)
Must check that the successor will not become overloaded, to prevent chain reactions:
Succ.UpperThreshold > Succ.Workload + ULNode.Workload
38. Additional Issues
Node locking
Prevents chain reactions/overcompensation
Workload from a single key
Add action not attempted: keys are assumed to be atomic
Ring collapse vulnerability
Monitoring interval stops too many concurrent changes
High volume of replaces
Switch off the replace action if few nodes remain in the pool
39. Dynamic Ring Performance
Reduction in hop count by controlling network size, based on theoretical results:
Max hops per lookup = log2(n)
Mean hops per lookup = ½ log2(n)
If ring A is smaller than ring B, then log2(A) < log2(B)
- in a static ring with correct routing information
Does this hold in more realistic dynamic scenarios, with nodes joining, leaving, or failing?
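The static-ring bound is easy to check numerically. The ring sizes below are illustrative; 1000 matches the node count used in the simulation that follows.

```python
import math

def max_hops(n):
    """Worst-case Chord lookup cost in a static ring with correct
    routing information: log2(n) hops."""
    return math.log2(n)

def mean_hops(n):
    """Average-case lookup cost: half the worst case."""
    return 0.5 * math.log2(n)

# A smaller ROME-controlled ring vs a ring using all 1000 nodes:
for n in (64, 1000):
    print(n, round(max_hops(n), 2), round(mean_hops(n), 2))
# 64 nodes:   max 6.0 hops,  mean 3.0
# 1000 nodes: max ~9.97 hops, mean ~4.98
```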
40. Simulation: ROME vs Chord
Two Chord rings, one running ROME
1000 available nodes
Node capacity: 100-400 workload units
10 node joins per time tick
10 node failures per tick
500 lookups per tick
ROME Thresholds: 5% Lower, 95% Upper
Target Workload: 50%
Standard Chord maintenance/update routines run every clock tick
52. Limitations of ROME
Requires workload to be less than node capacity
May not be appropriate if the two are likely to be near-equal for the majority of the network’s lifetime
Optimisations occur at the node level
Does this always yield a globally optimal solution?
Based on Chord and DHT architectures
Any issues found there are (probably) inherited by ROME
With increasing workload and a failed bootstrap server, no more nodes can be added; these would be present in a standard Chord ring, so the ROME ring would drop more workload
53. Potential Applications
Similar to Chord applications
DNS lookup services
File storage systems
Simple databases
Messaging
Service Discovery
Distributed Processing
Where workload is likely to be lower than the capacity offered by connected nodes
54. Future Work
Remove reliance on a single server
Share node pools between multiple bootstrap servers using G-ROME
Combinations of actions
Apply ROME concepts to other P2P networks
Use in unstructured networks?
Applications in other domains
Wireless/ad-hoc networks with dynamic machine joins/leaves
55. Conclusions
Proposed ROME, a layer running on top of Chord
Chord routes messages in O(log2 n) hops, where n is the number of nodes in the ring
ROME controls the size of the underlying Chord ring
Simple set of actions to add/remove nodes
Simulations show ROME can reduce lookup cost vs a standard Chord ring
Platform for further work
Enhance ROME, use it as a building block for new services, apply it to other domains
56. Publications List
J Salter and N Antonopoulos, “An Optimised Two-Tier P2P Architecture for Contextualised Keyword Searches”, Future Generation Computer Systems, 2007.
G Exarchakos, J Salter and N Antonopoulos, “G-ROME: A Semantic Driven Model for Capacity Sharing Among P2P Networks”, to appear in Internet Research.
G Exarchakos, J Salter and N Antonopoulos, “Semantic Cooperation and Node Sharing Among P2P Networks”, Sixth International Network Conference (INC 2006).
J Salter and N Antonopoulos, “The CinemaScreen Recommender Agent: A Film Recommender Combining Collaborative and Content-Based Filtering”, IEEE Intelligent Systems, 2006.
N Antonopoulos, J Salter and R Peel, “A Multi-Ring Method for Efficient Multi-Dimensional Data Lookup in P2P Networks”, 2005 International Conference on Foundations of Computer Science (FCS ’05).
J Salter, N Antonopoulos and R Peel, “ROME: Optimising Lookup and Load Balancing in DHT-based P2P Networks”, 2005 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA ’05).
J Salter and N Antonopoulos, “ROME: Optimising DHT-based Peer-to-Peer Networks”, Fifth International Network Conference (INC 2005).
N Antonopoulos and J Salter, “Efficient Resource Discovery in Grids and P2P Networks”, Internet Research, 2004.
J Salter and N Antonopoulos, “An Efficient Fault Tolerant Approach to Resource Discovery in P2P Networks”, UniS Computing Sciences Report CS-04-02, 2004.
N Antonopoulos and J Salter, “Improving Query Routing Efficiency in Peer-to-Peer Networks”, UniS Computing Sciences Report CS-04-01, 2004.
J Salter and N Antonopoulos, “An Efficient Mechanism for Adaptive Resource Discovery in Grids”, Fourth International Network Conference (INC 2004).
N Antonopoulos and J Salter, “Towards an Intelligent Agent Model for Resource Discovery in Grid Environments”, IADIS International Conference Applied Computing 2004.