Thesis Presentation: P2P-VoD on Internet (Rodrigo Godoi)


  1. Universitat Autònoma de Barcelona
     Computer Architecture & Operating Systems Department
     P2P-VoD on Internet: Fault Tolerance and Control Architecture
     Rodrigo Godoi
     Advisor: Dr. Porfidio Hernández Budé
     Barcelona, July 2009.
  2. Contents
  3. Contents
  4. Video on Demand - VoD
     - Multimedia service
     - Asynchronous requests
     - Every client enjoys the entire content
     - Long sessions (> 60 min.)
  5. VoD - requirements and constraints
     Scalability → Large-scale Video on Demand (LVoD)
     - Clients: thousands, widely dispersed
     - Multimedia contents: huge catalogue
     Multicast / Peer-to-Peer / Internet
     Soft real-time
     - Time limit on handling data
     - Quality of Service (QoS)
     Control Architecture / Fault Tolerance
  6. Multicast - implementations
     IP Multicast:
     - Source tree (e.g. PIM-DM)
     - Shared tree (e.g. PIM-SM)
     Application Layer Multicast - ALM (e.g. NICE, ALMI)
     Overlay Multicast (e.g. OMNI)
  7. Multicast - Patching
     Patching: a multicast technique for multimedia data delivery. A late client joins the
     ongoing multicast base stream and receives the missed prefix as a unicast patch stream.
     [Figure: base stream started at t = 0; a client arriving at t = 6 plays a unicast patch
     stream while buffering the multicast base stream]
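As a reading aid, a minimal sketch of the patching decision on this slide. All names are
illustrative; the slide only fixes the idea that a client arriving at time t reuses the
ongoing multicast base stream and fetches the missed prefix [0, t) over unicast:

```cpp
#include <iostream>

// Hypothetical sketch of patching: a client arriving `arrival` time units
// after the base stream started either joins the multicast and patches the
// missed prefix over unicast, or (past a threshold) triggers a new base stream.
struct PatchPlan {
    bool   joinBaseStream; // reuse the ongoing multicast base stream?
    double patchLength;    // prefix to fetch as a unicast patch stream
};

PatchPlan planDelivery(double arrival, double baseStart, double threshold) {
    double missed = arrival - baseStart;  // content the client missed
    if (missed <= threshold)
        return {true, missed};            // multicast base + unicast patch
    return {false, 0.0};                  // too late: start a fresh base stream
}

int main() {
    PatchPlan p = planDelivery(6.0, 0.0, 30.0); // the slide's t = 6 example
    std::cout << "patch length: " << p.patchLength << " time units\n";
}
```

For the slide's example, the client arriving at t = 6 plays a 6-unit unicast patch while
the shared multicast base stream is buffered for later playback.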
  8. Peer-to-Peer
     "Free cooperation of equals in view of the performance of a common task."
     - Takes advantage of resources (storage, cycles, content, human presence) available
       at the edges of the Internet.
     - Usage: file sharing, distributed multimedia systems, high-performance computing.
     Synchronised usage of peers' resources: Collaboration Groups
  9. Peer-to-Peer - classification
     Location mechanism
     [Figure: peer organisations built around a tracker, around supernodes, or among
     peers alone]
  10. Peer-to-Peer - classification
      Overlay topology: Chain, Mesh, Tree
  11. Internet environment
      - Worldwide scale
      - Heterogeneous environment
      - Best-effort service
      - Exponential growth rate
      Organisation:
      - Autonomous Systems (AS): collections of connected IP routing prefixes under the
        control of one or more network operators (ISPs, universities, companies)
      - Networks arranged by dimension and purpose (LAN, WAN, MAN)
      - Modelled by complex network theory: clustering coefficient, average path length
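The two measures named on this slide have standard definitions in complex network theory
(these are textbook formulas, not taken from the deck): for a node i of degree k_i whose
neighbours share E_i links among themselves, and shortest-path length d(i, j) over N nodes,

```latex
C_i = \frac{2E_i}{k_i\,(k_i-1)}, \qquad
\ell = \frac{1}{N(N-1)} \sum_{i \neq j} d(i,j)
```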
  12. Problem
      Large-scale system → failures: network, server, peers; frequent arrivals/departures
  13. Problem
      Failure/error treatment: input rate fluctuation, source crash → cushion buffer,
      start-up delay → QoS
      The VoD service must...
      - respect deadlines
      - provide low start-up delay
      - make clever use of the buffer
      - enforce low control overhead
      Control Architecture / Fault Tolerance
  14. Control relevance
      P2P and Multicast; heterogeneity of the Internet, of peers' capabilities and of
      lifetimes; resource sharing
      Delivery Architecture ↔ Control Architecture
  15. Fault Tolerance
      [Figure: the consequence of a failure is a system defect; treating the defect alone
      does not solve the fault]
  16. State of the art
  17. Contents
  18. Goal of the Thesis
      To assess the impact of control and propose a Fault Tolerance Scheme for a P2P-VoD
      service on the Internet.
      - Scalability
      - Flexibility
      - Reliability
      - Efficiency
      - Low overhead
      → QoS
  19. Contents
  20. System architecture
      [Figure: an Internet Autonomous System with distributed video servers and proxy
      servers on a servers overlay topology, an IP Multicast zone, and clients organised
      in a P2P overlay with collaboration groups]
  21. The Failure Management Process
      Basis of fault tolerance mechanisms:
      - Detection: income stream monitoring; heartbeat messages
      - Recovery: centralised; subsequent queries
      - Maintenance: network infrastructure; peer status
  22. Load and Time metrics
      Load cost: volume of control messages that flows through the system during failure
      management → control overhead (congestion, bandwidth consumption)
      Time cost: time consumed solving peer failures → control efficiency (start-up delay,
      buffer usage)
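One way to write these two metrics down, assuming F is the set of failures handled in a
session and per-failure message counts and delays are observable (the notation is mine,
not the thesis's):

```latex
\text{Load cost} \;=\; \sum_{f \in F} \bigl( M_{\text{det}}(f) + M_{\text{rec}}(f) + M_{\text{mnt}}(f) \bigr),
\qquad
\text{Time cost} \;=\; \sum_{f \in F} \bigl( t_{\text{det}}(f) + t_{\text{rec}}(f) \bigr)
```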
  23. Background: VoD service schemes
      The studied schemes gather different aspects of P2P-VoD services.
  24. PCM/MCDB
      PCM: Patch Collaboration Manager
      MCDB: Multicast Channel Distributed Branching
      [Figure: PCM and MCDB delivery with a bypass between channels]
  25. Fault Tolerance - PCM/MCDB
      - Centralised recovery
      - IP Multicast tree rearrangement
      [Figure: MCDB channels Ch.M0, Ch.M1, Ch.M2 exchanging detection, recovery and
      maintenance messages]
  26. P2Cast
      - Clients are divided into sessions according to their arrival time in the system
        (session threshold parameter T)
      - Best-fit algorithm: the peer with the greatest amount of available bandwidth is
        selected as parent (sketched below)
      [Figure: VoD server delivering base and patch streams to sessions 3 and 4; clients
      labelled by arrival time]
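A minimal sketch of the best-fit selection named above, assuming each candidate peer
advertises its unused outbound bandwidth (the struct and field names are hypothetical):

```cpp
#include <vector>

// Best-fit parent selection sketch for P2Cast: among the peers of the
// client's session, choose the one with the most available bandwidth that
// can still serve the stream; otherwise fall back to the VoD server.
struct Peer {
    int    id;
    double availableBw; // unused outbound bandwidth, kb/s
};

const Peer* bestFitParent(const std::vector<Peer>& session, double neededBw) {
    const Peer* best = nullptr;
    for (const Peer& p : session) {
        if (p.availableBw < neededBw) continue;          // cannot serve the stream
        if (!best || p.availableBw > best->availableBw)  // keep the best fit so far
            best = &p;
    }
    return best; // nullptr means: request the stream from the VoD server
}
```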
  27. Fault Tolerance - P2Cast
      - Failures of source peers (parents) provoke stream disruption
      - Recovery proceeds by subsequent queries
      [Figure: detection and recovery messages flowing towards the VoD server across
      sessions 3 and 4]
  28. Load cost - control messages
                    PCM/MCDB                       P2Cast
      Detection     Heartbeats                     Heartbeats
      Recovery      IP multicast rearrangement     Recovery request, subsequent queries
      Maintenance   Peers status, routers status   Routers status
  29. Time cost - time consumption
      PCM/MCDB: detection + recovery messages; latency along the path
      (network theory: small-world effect)
      P2Cast: detection + recovery messages + subsequent queries; latency along the path
      (network theory: small-world effect)
  30. Contents
  31. The Fault Tolerance Scheme (FTS)
      The FTS stands on peers' capabilities:
      - Input/output bandwidth (bwi, bwo)
      - Buffer size
      Buffer roles: cushion, delivery, collaboration, altruist
      → Fault Tolerance Groups
      [Figure: a peer's buffer split into Buffer In and Buffer Out, holding a sliding
      window of video blocks]
  32. The Fault Tolerance Scheme (FTS)
      [Figure: timeline (t = 0, 3, 10, 17) of a Manager Node (MN) and collaborators C1, C2
      building the distributed backup on the fly: video blocks 1-15 spread across the
      cushion, delivery, general-purpose and altruist buffer areas of the FTG members]
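As an illustration of "built on the fly": each block of the passing stream is kept by
exactly one FTG member, so the backup needs no retransmission. The round-robin choice in
this sketch is an assumption; the formation-law slide further below suggests shares
proportional to member bandwidth.

```cpp
#include <cstddef>

// Toy FTS backup placement: block i of the passing stream is stored by one
// FTG member, so the distributed backup is assembled without retransmission.
// Round-robin is an assumed policy; shares may instead follow bandwidth.
std::size_t backupOwner(std::size_t blockIndex, std::size_t groupSize) {
    return blockIndex % groupSize; // index of the member keeping this block
}
```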
  33. Load and Time costs with the FTS
      The proposed Fault Tolerance Scheme...
      - distributes the control through Manager Nodes
      - removes subsequent queries during recovery
      - eliminates messages for peer status maintenance
      - can detect failures through heartbeats (FTS I) or income stream monitoring (FTS II)
  34. Contents
  35. Simulation tool: VoDSim
      Computational simulations provide a more dynamic and scalable analysis.
      - Discrete event-driven model
      - More than 50 classes in C++
      - Over 46,000 lines of code
      - Peer arrival rate: Poisson
      - Content popularity: Zipf
  36. VoDSim extensions
      - Implementation of the ALM service scheme: P2Cast
      - Peer disruptions: Weibull (fault probability, lifetime)
      - FMP instrumentation: Load and Time costs measurement
      (the workload distributions are sketched below)
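The three distributions named on these two slides can be sampled with the C++ standard
library plus a small inverse-CDF step for Zipf. The rates and shapes below are
placeholders, not the thesis's parameter values:

```cpp
#include <cmath>
#include <random>
#include <vector>

std::mt19937 rng{42}; // fixed seed for reproducible runs

// Poisson arrival process: exponential inter-arrival gaps with rate lambda.
double nextArrivalGap(double lambda) {
    return std::exponential_distribution<double>{lambda}(rng);
}

// Zipf content popularity over a catalogue of n videos with skew s.
// (Rebuilding the CDF on every call is wasteful but keeps the sketch short.)
int pickVideo(int n, double s) {
    std::vector<double> cdf(n);
    double norm = 0.0;
    for (int k = 1; k <= n; ++k) {
        norm += 1.0 / std::pow(static_cast<double>(k), s);
        cdf[k - 1] = norm;
    }
    double u = std::uniform_real_distribution<double>{0.0, norm}(rng);
    for (int k = 0; k < n; ++k)
        if (u <= cdf[k]) return k; // index of the chosen video
    return n - 1;
}

// Weibull-distributed peer lifetime (shape k, scale lambda), as used for
// modelling peer disruptions.
double peerLifetime(double shape, double scale) {
    return std::weibull_distribution<double>{shape, scale}(rng);
}
```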
  37. Contents
  38. Experimental Results
  39. Failure Management Process validation
  40. Control vs. multimedia traffic
      Analytical results (PCM/MCDB and P2Cast): Δw = 13%-39%, Δw = 13%-37%, Δw = 10%-28%
      Simulated results (P2Cast)
  41. Load cost analysis
  42. Load cost analysis
  43. Time cost analysis
      High latency → time cost increment, driven by recovery control messages
  44. Time cost analysis
      Download rate: 1500 kb/s (750 + 750)
      Cushion buffer: 56 MB for a 5 min start-up delay; 11 MB for 1 min
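The buffer sizes follow directly from the download rate (buffer = rate × start-up delay,
with 8 bits per byte):

```latex
\frac{1500\,\text{kb/s} \times 300\,\text{s}}{8} \approx 56\,\text{MB},
\qquad
\frac{1500\,\text{kb/s} \times 60\,\text{s}}{8} \approx 11\,\text{MB}
```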
  45. Experimental Results
  46. Load cost analysis
      Cost increment vs. cost reduction
      FTS I: heartbeat detection; FTS II: buffer monitoring detection
  47. Load cost analysis
      - Overhead reduction
      - Scalability
  48. Time cost analysis
      High latency and the volume of communication drive the time cost increment; the FTS
      improves efficiency.
      Cushion buffer: 56 MB (5 min start-up delay) and 11 MB (1 min)
      Mean detection latency: τ = 1/(2·f_HB)
      FTS I: heartbeat detection; FTS II: buffer monitoring detection
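The τ formula on this slide is the usual mean detection latency of periodic heartbeats:
a failure lands uniformly inside a heartbeat period, so it waits half a period on average
before the next missed heartbeat. The example frequency is mine:

```latex
\tau = \frac{1}{2 f_{HB}}, \qquad
\text{e.g. } f_{HB} = 0.1\ \text{Hz} \;\Rightarrow\; \tau = 5\ \text{s}
```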
  49. Fault Tolerance service performance
      - Reliability
      - Flexibility
      Altruist buffer: 338 MB vs. 102 MB
  50. Contents
  51. Conclusions
      The control mechanism plays a crucial role in designing P2P-VoD systems.
      - Load cost → control overhead: network congestion, bandwidth resources
      - Time cost → efficiency: buffer usage, start-up delay
      Load and Time costs trade off against each other; reducing both improves
      Quality of Service.
  52. Conclusions
      The Fault Tolerance Scheme...
      - is flexible for Internet use
      - presents a hierarchical control structure
      - has a scalable backup mechanism
      - does not demand extra data communication or dedicated resources
      - is able to guarantee system reliability
      - reduces Load and Time costs
  53. Contents
  54. Future Work
      - Application and assessment of the FTS in a wide range of VoD architectures and
        service policies
      - Implementation of the FTS in a simulation environment
      - FTS improvements: storing parts of non-visualised contents; using non-volatile
        storage devices (e.g. Solid State Disk drives)
      - Addition of VCR / DVD-like operations
      - Usage of client behaviour information to improve system performance
  55. P2P-VoD on Internet: Fault Tolerance and Control Architecture
      Rodrigo Godoi
      Thank you / Gracias / Obrigado
      Barcelona, July 2009.
  56. The Fault Tolerance Scheme (FTS)
      Architecture elements:
      - Server: content seed
      - Peer: multimedia client / source
      - FTG member: collaborator in the FTS
      - Distributed backup: flexibility and reliability
      - Built on the fly: the backup does not need retransmission
      - P2P-based: the mechanism uses the system's own available resources
      - Hierarchical control: scalability and deployment
      - Manager Node: organises and monitors the FTG
      [Figure: server, clients and a Fault Tolerance Group with its Manager Node and FTG
      members, linked by control communication]
  57. The FTS formation law
      Input parameters: FTG size, distributed backup, service conditions.
      While service conditions hold: if a candidate meets the constraints, add the
      collaborator to the FTG; if the peers' bandwidth reaches the playback rate
      (bw ≥ Vpr), start a new FTG.
  58. The FTS formation law - example
      Collaboration capacity under buffer and bandwidth constraints determines the FTG
      size: while the peers' bandwidth is lower than the playback rate (bw < Vpr), keep
      adding collaborators; once it is reached, start a new FTG.
      Example: MN [500 kb/s] + C1 [200 kb/s] + C2 [300 kb/s] + C3 [500 kb/s] =
      Vpr [1500 kb/s]
      [Figure: video blocks A, B, C, ... backed up across C1, C2, C3 and MN in shares of
      2/15, 3/15, 5/15 and 5/15, matching each member's bandwidth]
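Reading the two formation-law slides together, a sketch of the loop; the predicate names
and the aggregation of member bandwidths are my interpretation of "bw" on the slides:

```cpp
#include <vector>

// Sketch of the FTS formation law: collaborators join the current FTG until
// the group's aggregated bandwidth reaches the playback rate Vpr, at which
// point the group is complete and a new FTG is started.
struct Collaborator {
    double bw;     // outbound bandwidth contributed to the group, kb/s
    double buffer; // buffer offered for the distributed backup, MB
};

std::vector<std::vector<Collaborator>>
formGroups(const std::vector<Collaborator>& candidates, double Vpr) {
    std::vector<std::vector<Collaborator>> groups(1);
    double aggregated = 0.0;
    for (const Collaborator& c : candidates) {
        groups.back().push_back(c);  // add collaborator to the current FTG
        aggregated += c.bw;
        if (aggregated >= Vpr) {     // bw >= Vpr: FTG complete, open a new one
            groups.emplace_back();
            aggregated = 0.0;
        }
    }
    if (groups.back().empty()) groups.pop_back(); // drop a trailing empty group
    return groups;
}
```

With the slide's example, 500 + 200 + 300 + 500 kb/s = 1500 kb/s = Vpr, so MN, C1, C2
and C3 close exactly one complete FTG.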
  59. The Fault Tolerance Scheme (FTS)
      Creation of Fault Tolerance Groups (phases I-IV): a joining client queries the local
      server for collaboration availability; on an FTS acknowledgement it either joins an
      existing FTG, starts a new FTG and becomes its Manager Node, or remains in standby
      status.
  60. The Fault Tolerance Scheme (FTS)
      FTG complexity and maintenance: O(N_CFTG)
      - MN failure: a new MN is designated (e.g. a standby peer) and the FTG is restored
      - Member failure: the FTG is restored
  61. Evaluation environment
      Underlying network: GT-ITM topology generator, transit-stub model
      - 1 transit domain (3 routers)
      - 6 stub domains (54 routers)
      Service schemes:
      - ALM / tree-based P2P (P2Cast)
      - IP Multicast / mesh-based P2P (PCM/MCDB)
      Network protocols:
      - Unicast: OSPF
      - IP Multicast: IGMP and PIM-SM
  62. Evaluation environment
      Network protocols
  63. Conclusions - publications
