(Slides) P2P video broadcast based on per-peer transcoding and its evaluation on PlanetLab

Shibata, N., Yasumoto, K., and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l. Conf. on Parallel and Distributed Computing and Systems (PDCS2007), (November 2007).

http://ito-lab.naist.jp/themes/pdffiles/071121.shibata.pdcs2007.pdf

  • In recent years, with the spread of broadband, video delivery by streaming has attracted attention. However, each user has a different playback environment and different preferences, and ensuring scalability as the number of users grows is another important issue. In this work, we propose a method that efficiently delivers multi-object video to users with different quality requirements while ensuring scalability and raising user satisfaction.
  • First, the multiversion method. Here the server stores video data in several different qualities in advance and delivers the version that best matches each user's request. Its advantage is low delay. However, the server must keep videos at various bit rates, which requires a large amount of disk space, and user satisfaction does not become very high.
  • Next, the online transcoding method. Proxies are placed in the network or at the server, and a proxy transcodes the data appropriately for each user's request before delivery. Satisfaction is higher than with the multiversion method, but providing proxy servers adds cost, and the high load on the proxies is also a problem.
  • Third, the layered multicast method. The video data is layer-encoded in advance: the base layer can be played back on its own, or combined with the 2nd layer and played back as, for example, a 500k stream. Data up to the appropriate layer is delivered according to each user's request. Multicast delivery keeps the server load light, but the data must be layered beforehand, and satisfaction depends on the number of layers: more layers raise satisfaction but increase the playback load.
  • Here is the idea of the proposed multi-stage transcoding multicast method. The server holds only a single high-quality video. Suppose user 1 requests 800k: the server transcodes the video to 800k and sends it to user 1. Next, suppose user 2 requests 500k: while receiving and playing the 800k video, user 1 simultaneously transcodes it to 500k and sends it to user 2. Likewise, user 2 transcodes to 300k for user 3. This scheme distributes the computation and network load and is expected to improve satisfaction, although receiving via many intermediate nodes may increase the delay.

Transcript

  • 1. P2P video broadcast based on per-peer transcoding and its evaluation on PlanetLab Naoki Shibata, † Keiichi Yasumoto, Masaaki Mori Shiga University, † Nara Institute of Sci. and Tech.
  • 2. Motivation
    • Watching TV on various devices
      • Screen resolution of mobile phone : 96x64 ~
      • Screen resolution of Plasma TV : ~ 1920x1080
      • Video delivery method for wide variety of devices
        • Screen resolution
        • Computing power
        • Available bandwidth to Internet
    • Popularization of P2P video delivery
      • Joost
      • Zattoo
  • 3. Overview of this presentation
    • Improvement to our previously proposed video delivery method named MTcast
      • Features of (previous) MTcast
      • Video delivery method based on P2P video streaming
      • Serves requests with different video qualities
      • Scalable with number of users
      • New improvement
      • Reduced backbone bandwidth for further scalability
    • Evaluation of performance on PlanetLab
      • Implementation in Java language
      • Evaluation in PlanetLab environment
  • 4. Outline
    • Background
    • Related works
    • MTcast overview
    • Implementation overview
    • Evaluation
  • 5. Multiversion method
    • Minimum delay
    • No need of transcoding
    • Low user satisfaction : # of served video qualities = # of versions
    • High network load
    G. Conklin, G. Greenbaum, K. Lillevold and A. Lippman: ``Video Coding for Streaming Media Delivery on the Internet,'' IEEE Transactions on Circuits and Systems for Video Technology, 11(3), 2001.
    (Figure: the server stores 300k and 500k versions; a 200k request is served the 300k version, and a 400k request the 500k version.)
  • 6. Online transcoding method
    • S. Jacobs and A. Eleftheriadis: ``Streaming Video using Dynamic Rate Shaping and TCP Flow Control,'' Visual Communication and Image Representation Journal, 1998.
    • Higher user satisfaction
    • Additional cost for proxies
    • # of qualities is restricted by capacity of proxies
    (Figure: the server sends a 1000k stream to transcoding proxies, which deliver 300k, 500k, and 700k streams.)
  • 7. Layered multicast method
    • Low computation load on server
    • User satisfaction depends on # of layers
    • Limitation on # of layers … High CPU usage to decode many layers
    J. Liu, B. Li and Y.-Q. Zhang: ``An End-to-End Adaptation Protocol for Layered Video Multicast Using Optimal Rate Allocation,'' IEEE Transactions on Multimedia, 2004.
    (Figure: a 200k base layer, a +300k 2nd layer, and a +500k layer; receivers combine layers up to their requested rate.)
  • 8. Outline
    • Background
    • Related works
    • MTcast overview
    • Implementation overview
    • Evaluation
  • 9. Service provided by MTcast
    • Network environment
      • Wide area network across multiple domains
    • Number of users
      • 500 to 100,000
    • Kind of contents
      • Simultaneous broadcast of video - same as TV broadcast
      • A new user can join and receive the video from the scene currently being broadcast
    • Kind of request by users
      • Each user can specify bit rate of video
        • We assume resolution and frame rate are determined by the bit rate
  • 10. Idea of MTcast
    (Figure: the server holds a 1000k video. It transcodes it to 800k for a user requesting 800k; that user transcodes to 500k for a user requesting 500k, who in turn transcodes to 300k for a user requesting 300k.)
  • 11. Building transcode tree
    • The transcode tree is the video delivery tree
    (Figure: user nodes' bit rate requests are sorted in descending order.)
  • 12. Building transcode tree
    (Figure: sorted requests 2000k, 2000k, 1920k, 1850k, 1830k, 1800k, 1800k, 1780k.)
    • Make groups of k user nodes from the top (k is a constant decided for each video broadcast)
    • Each group is called a layer
    • The minimum bit rate requested within each layer is the bit rate actually delivered to that layer
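The layering step above can be sketched as follows. This is a minimal illustration in Java (the prototype's implementation language); the class and method names are ours, not from the actual code:

```java
import java.util.*;

// Sketch of the layer construction: sort bit-rate requests in descending
// order, group every k nodes into a layer, and deliver to each layer the
// minimum rate requested within it, so no node receives more than it asked for.
public class LayerBuilder {
    static List<Integer> layerRates(int[] requestsKbps, int k) {
        int[] sorted = requestsKbps.clone();
        Arrays.sort(sorted);                 // ascending order
        List<Integer> delivered = new ArrayList<>();
        // Walk from the highest request down, k nodes per layer.
        for (int top = sorted.length - 1; top >= 0; top -= k) {
            int low = Math.max(top - k + 1, 0);
            delivered.add(sorted[low]);      // minimum request in this layer
        }
        return delivered;
    }

    public static void main(String[] args) {
        int[] requests = {2000, 2000, 1920, 1850, 1830, 1800, 1800, 1780};
        System.out.println(layerRates(requests, 3)); // one delivered rate per layer
    }
}
```

With k = 3 and the requests from the slide, the layers deliver 1920k, 1800k, and 1780k respectively.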
  • 13. Building transcode tree
    • Put each layer at the place of nodes in a binary tree
      • In the order of depth first search
    • Construct a modified binary tree
    (Figure: layers at 2000k, 1800k, 1500k, 1300k, 1100k, 900k, and 700k arranged in a binary tree rooted at the video server.)
  • 14. Advantages of building tree in this manner
    • Videos in many qualities can be served
      • Number of qualities = Number of layers
    • Each node is required to perform only one transcoding
    • Length of video delivery delay is O(log( # of nodes ))
    • Tolerant to node failures
    (Figure: the same transcode tree, with layers from 2000k down to 700k below the video server.)
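The depth-first placement can be sketched like this: an illustrative Java fragment under the simplifying assumption of a complete binary tree, with names of our own choosing:

```java
import java.util.*;

// Place layers, sorted by decreasing bit rate, into a binary tree in
// depth-first (preorder) order. Preorder visits a node before all of its
// descendants, so a parent's layer index is always smaller than its
// children's; hence its bit rate is at least theirs, and video only ever
// needs to be transcoded downward along the tree.
public class TreePlacement {
    static int[] placeLayers(int numLayers) {
        int[] place = new int[numLayers];    // heap position -> layer index
        preorder(place, 0, new int[]{0});    // root is position 0
        return place;
    }

    private static void preorder(int[] place, int pos, int[] next) {
        if (pos >= place.length) return;
        place[pos] = next[0]++;              // assign the next-highest layer
        preorder(place, 2 * pos + 1, next);  // left subtree first
        preorder(place, 2 * pos + 2, next);  // then right subtree
    }

    public static void main(String[] args) {
        // 7 layers: the root gets layer 0 (highest rate); depth is O(log n).
        System.out.println(Arrays.toString(placeLayers(7)));
    }
}
```

The resulting tree depth of O(log n) is what bounds the delivery delay stated on the slide.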
  • 15. Recovery from node failure
    • No increase in number of video transcoding on each node
    • Degree of tolerance of node failure depends on :
      • Number of nodes in each layer
        • If a layer contains many nodes, it tolerates more failures
      • Available bandwidth on each node
    • Buffered video data is played back during recovery
      • Users never notice node failures
  • 16. Working of MTcast
    • Starting up
      • Requests are accepted until a predetermined time before the broadcast begins
      • A transcode tree is built, and its description is distributed to the user nodes
      • Start delivery of video data
    • Serving requests from newly joining nodes
      • The request is temporarily handled using the bandwidth reserved for node failures
      • The request is properly accommodated at the next reconstruction of the transcode tree
    • Reconstructing transcode tree
      • The tree gradually degrades as node failures and new join requests accumulate
      • Transcode tree has to be reconstructed periodically
      • Buffered video data is played back during reconstruction
      • Each node is allowed to change bit rate request on reconstruction
  • 17. Extension for real-world usage
    • Each link is an overlay link: traffic may go back and forth many times between ASs, consuming precious inter-AS bandwidth
    • Idea of the extension: nodes in the same AS should be connected preferentially
    • Connection priority is decided according to hop count and available bandwidth
    (Figure: nodes in service providers A, B, and C.)
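The connection priority on this slide might look like the following comparator. The class and field names are hypothetical, not taken from the implementation:

```java
import java.util.*;

// Hypothetical sketch of the extension's connection priority: candidate
// parent nodes in the same AS come first; ties are broken by hop count,
// then by available bandwidth, matching the slide's rule.
public class CandidateOrder {
    static class Candidate {
        final String as; final int hops; final int bwKbps;
        Candidate(String as, int hops, int bwKbps) {
            this.as = as; this.hops = hops; this.bwKbps = bwKbps;
        }
    }

    static Comparator<Candidate> priorityFor(String myAs) {
        return Comparator
            .<Candidate, Boolean>comparing(c -> !c.as.equals(myAs)) // same AS first
            .thenComparingInt(c -> c.hops)                          // then fewer hops
            .thenComparingInt(c -> -c.bwKbps);                      // then more bandwidth
    }

    public static void main(String[] args) {
        List<Candidate> cands = new ArrayList<>(List.of(
            new Candidate("B", 3, 5000),
            new Candidate("A", 8, 2000),
            new Candidate("A", 2, 1000)));
        cands.sort(priorityFor("A"));
        for (Candidate c : cands)
            System.out.println(c.as + " hops=" + c.hops + " bw=" + c.bwKbps);
    }
}
```

Sorting the candidate list with this comparator keeps overlay links within one service provider whenever possible, which is what reduces the inter-AS traffic.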
  • 18. Outline
    • Background
    • Related works
    • MTcast overview
    • Implementation overview
    • Evaluation
  • 19. Design policy
    • Usable on PlanetLab
    • Usable in many similar projects
    • Easily modifiable
    • Good performance, if possible
    • Why not use JMF?
      • It’s not maintained
      • Huge buffering delay
  • 20. Modular design
    • We designed many classes for video delivery
      • Transcoder
      • Transmitter and receiver to/from network
      • Buffer
      • etc.
    • Each node is a set of instances of these classes
    • Each node instantiates these classes and connects the instances according to commands from a central server
    The behavior of each node can be changed flexibly by changing the commands from the central server
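The modular wiring could be sketched as below. This is a deliberately toy model (string chunks instead of video frames, and our own names, not the prototype's classes) of how a node assembles instances according to a command list from the central server:

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Toy model of the modular design: each module transforms a chunk of data,
// and a node chains module instances in the order named by the central
// server's command, so a node's role can be changed without changing code.
public class NodePipeline {
    static final Map<String, UnaryOperator<String>> MODULES = Map.of(
        "receiver",    chunk -> chunk,             // network input (stub)
        "buffer",      chunk -> chunk,             // playout buffer (stub)
        "transcoder",  chunk -> chunk + "@500k",   // rate reduction (stub)
        "transmitter", chunk -> chunk              // network output (stub)
    );

    // The central server's command is modeled as an ordered list of module names.
    static String run(List<String> command, String chunk) {
        for (String name : command)
            chunk = MODULES.get(name).apply(chunk);
        return chunk;
    }

    public static void main(String[] args) {
        List<String> command = List.of("receiver", "buffer", "transcoder", "transmitter");
        System.out.println(run(command, "frame0")); // frame0@500k
    }
}
```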
  • 21. Outline
    • Background
    • Related works
    • MTcast overview
    • Implementation overview
    • Evaluation
  • 22. Items of evaluation
    • Results of evaluation published in [9]
      • Computation load of transcoding
      • Computation load of making transcode tree
      • User satisfaction
    • Video quality degradation by transcoding
    • Effectiveness of the bandwidth reduction method
    • Time to start up the system on PlanetLab
    • Time to recover from node failure on PlanetLab
  • 23. Results of evaluation published in [9]
    • Computation load of transcoding
      • Measured computation load when video playback and transcoding are simultaneously executed
      • Measured on desktop PC, notebook PC and PDA
      • Result : All processing can be performed in real time
    • Computation load of making transcode tree
      • 1.5 secs of computation on Pentium 4 2.4GHz
      • Time complexity : O( n log n )
      • Network load : Practical if the node computing the transcode tree has enough bandwidth
    • User satisfaction
      • Satisfaction degree is defined as in [3]
      • Built a network with 6,000 nodes using Inet 3.0
      • Satisfaction with our method was at least 6% higher than with the layered multicast method
        • Satisfaction becomes better as the number of nodes increases
  • 24. Video quality degradation by transcoding
    • Video quality may degrade through repeated transcoding
    • We measured the PSNR when video is transcoded with our method
      • We compared :
        • A video transcoded only once
        • A video transcoded multiple times
  • 25. Effectiveness of bandwidth reduction(1/2)
    • Compared physical hop count in transcode tree
      • By our method
      • By randomly selecting node to connect
    • Comparison by simulation
      • Number of user nodes : 1000
        • 333 nodes have bandwidth between 100 and 500kbps
        • 333 nodes have bandwidth between 2 and 5Mbps
        • 334 nodes have bandwidth between 10 and 20Mbps
      • Result of simulation
        • Hop count by the random method : 4088
        • Hop count by our method : 3121
        • 25% reduction of hop count by our method
  • 26. Effectiveness of bandwidth reduction(2/2)
    • Compared physical hop count in transcode tree
      • By our method
      • By randomly selecting node to connect
    • Comparison on PlanetLab
      • 20 user nodes in 7 countries
      • Result
        • Random selection : hop count 343, 361, 335
        • Our method : hop count 314, 280, 277
        • 16% reduction of hop count by our method
  • 27. Time to start up the system on PlanetLab
    • Measured the time from system start until the following events
      • All nodes complete establishing connection
      • All nodes receive the first one byte of data
    • Comparison on PlanetLab
      • 20 user nodes in 7 countries
      • Nodes are cascaded, not connected in a tree
    • Result of evaluation
    • Observation
      • Most of the time is spent establishing connections
      • All operations are performed in parallel, so the observed time is that of the slowest node to establish its connection
  • 28. Time to recover from node failure on PlanetLab
    • Measured the following times after a node failure
      • Establishing connection to a new node
      • Receiving data from the new node
    • Comparison on PlanetLab
      • 20 user nodes in 7 countries
      • Nodes are cascaded, not connected in a tree
    • Result of evaluation
    • Observation
      • These are practical values
      • During the recovery time, buffered data is played back, so users never notice the node failure
  • 29. Conclusion
    • Improved MTcast
      • Bandwidth usage between ASs is reduced
    • Made a prototype system in Java
    • Evaluation on PlanetLab
    • Ongoing works include
      • Serving requests with more parameters, including picture size, frame rate, and audio channels
      • Further reduction of bandwidth between nodes