Shibata, N., Yasumoto, K., and Mori, M.: P2P Video Broadcast based on Per-Peer Transcoding and its Evaluation on PlanetLab, Proc. of 19th IASTED Int'l. Conf. on Parallel and Distributed Computing and Systems (PDCS2007), pp. 478-483, (November 2007).





P2P VIDEO BROADCAST BASED ON PER-PEER TRANSCODING AND ITS EVALUATION ON PLANETLAB

Naoki Shibata, Masaaki Mori
Dept. Info. Processing & Management, Shiga University
1-1-1 Bamba, Hikone 522-8522, Japan
email: {shibata, mori}@biwako.shiga-u.ac.jp

Keiichi Yasumoto
Graduate School of Information Science, Nara Institute of Science and Technology
8916-5 Takayama, Ikoma 630-0192, Japan
email: yasumoto@is.naist.jp

ABSTRACT
We have previously proposed a P2P video broadcast method called MTcast for simultaneously delivering video to user peers with different quality requirements. In this paper, we design and implement a prototype system of MTcast and report the results of its performance evaluation in the real Internet environment. MTcast relies on each peer to transcode and forward video to other peers. We conducted experiments on 20 PlanetLab nodes, evaluated start-up delay and recovery time from peer leaving/failure, and confirmed that MTcast achieves practical performance in a real environment.

KEY WORDS
P2P, Video, Broadcast, Transcoding, Overlay Network

1 Introduction

Recently, many types of computing terminals, ranging from cell phones to HDTVs, are connected to the Internet, and consequently an efficient video delivery method for terminals with different computation powers, display sizes, and available bandwidths is required. There are several studies on simultaneously delivering video to multiple users with different quality requirements. In the multiversion technique [1], multiple versions of a video with different bitrates are stored at a server so that the most appropriate video can be delivered to each user within resource limitations. In the on-line transcoding method [2], the original video is transcoded at a server or at an intermediate node (i.e., a proxy) into videos of various qualities according to the receivers' requests, and forwarded to the receivers. However, large computation power is required for transcoding by a server or an intermediate node. In the layered multicast method [3, 4], video is encoded with layered coding techniques such as those in [5] so that each user can decode the video by receiving an arbitrary number of layers, although splitting into many layers imposes computation overhead on user nodes.

Furthermore, there are many studies on video streaming in peer-to-peer networks, such as OMNI [6] and CoopNet [7]. In OMNI (Overlay Multicast Network Infrastructure), each user node works as a service provider as well as a service user, and a multicast tree is composed of the user nodes so that the video delivery service is provided to all user nodes through the tree. OMNI can adapt to changes in the user node distribution and the network conditions. In CoopNet, user nodes cache parts of the stream data and deliver them through multiple diverse distribution trees to the receiver nodes while the server load is high. OMNI and CoopNet, unfortunately, do not treat heterogeneous quality requirements.

There are some approaches for heterogeneous-quality video delivery on P2P networks. PAT [8] is a method to reduce the computation resources necessary for on-line transcoding in a P2P network by making peers share part of the data generated during encoding and decoding of video. We have also proposed MTcast (Multiple Transcode based video multicast) [9], which achieves efficient and robust video broadcast to multiple heterogeneous users by relying on user peers to transcode and forward video. MTcast constructs a tree whose root is the sender of the original video content as a variation of a perfect n-ary tree, where peers with higher quality requirements are located near the root of the tree.

In this paper, aiming to validate the practicability and effectiveness of MTcast in the real Internet environment, we design and implement the MTcast system with a more efficient tree construction mechanism than the original MTcast. In the new tree construction mechanism, the parent-child relationship between peers is decided based on bandwidth and topology measurements on the physical path. We have also designed and implemented a prototype system of MTcast. With the prototype, we have conducted experiments on PlanetLab [10], and confirmed that MTcast achieves reasonably short start-up latency, small channel switching time, and small recovery time from peer leaving/failure.

2 MTcast Overview

In this section, we provide an overview of MTcast [9].

2.1 Target Environment

MTcast is a peer-to-peer based video broadcast method for heterogeneous peers whose available bandwidths, computation powers, screen sizes, etc. are different. The target environment of MTcast is shown in Table 1.

MTcast does not treat the VoD (Video on Demand) service. Instead, it provides a service of broadcasting a video during a scheduled time to the peers requiring the video, which is very similar to ordinary TV broadcasts. After a video broadcast starts, new users can join and receive the video delivery from the scene currently on broadcast.

2.2 Definitions

Let s denote a video server, and U = {u1, ..., uN} denote a set of user nodes.
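The peer model of this section can be sketched in Java, the language of the prototype described in Sect. 3. The paper states only that ui.ntrans(q) and ui.nlink(q) are calculated from the peer's computation power, ui.upper_bw, and the video quality (see below); the concrete formulas in this sketch (a fixed CPU budget and a pure bandwidth division) are illustrative assumptions, not the authors' implementation.

```java
// Illustrative sketch of the peer model of Sect. 2.2. The bodies of
// ntrans(q) and nlink(q) are assumptions: the paper only says they are
// calculated from computation power, upper_bw and the video quality.
class Peer {
    final String id;
    final double upperBw;        // upstream bandwidth, kbps (ui.upper_bw)
    final double lowerBw;        // downstream bandwidth, kbps (ui.lower_bw)
    final double q;              // required video bitrate, kbps (ui.q)
    private final int cpuBudget; // assumed: real-time transcodes sustainable

    Peer(String id, double upperBw, double lowerBw, double q, int cpuBudget) {
        this.id = id;
        this.upperBw = upperBw;
        this.lowerBw = lowerBw;
        this.q = q;
        this.cpuBudget = cpuBudget;
    }

    // u.ntrans(q): max number of simultaneous real-time transcodings;
    // assumed CPU-bound and independent of quality here
    int ntrans(double quality) {
        return cpuBudget;
    }

    // u.nlink(q): max number of simultaneous forwardings of a quality-q
    // stream; assumed limited only by upstream bandwidth
    int nlink(double quality) {
        return (int) (upperBw / quality);
    }
}
```

Under these assumptions, a peer with 1,500 kbps of upstream bandwidth can forward three 500 kbps streams at once, which is just enough to satisfy inequality (2) of Sect. 2.4 for n = 2.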
We assume that for each ui, the available upstream (i.e., node to network) bandwidth and the available downstream (i.e., network to node) bandwidth can be measured. We denote them by ui.upper_bw and ui.lower_bw, respectively. Let ui.q denote ui's video quality requirement. We assume that ui.q is given by the bitrate of the video, calculated from the required picture size and frame rate. Let ui.ntrans(q) denote the maximum number of simultaneous transcodings which can be executed by ui for videos with quality q. Let ui.nlink(q) denote the maximum number of simultaneous forwardings of videos with quality q which can be performed by ui. ui.ntrans(q) and ui.nlink(q) are calculated from the computation power of ui, ui.upper_bw, and the video quality.

Table 1. Target Environment

Item | Explanation
Types of peers | Desktop PC, laptop PC, PDA, cell phone, etc.
Peer's home network connected to the Internet | ADSL, CATV, FTTH, WLAN, cellular network, etc.
Number of peers | up to 100,000
Contents for broadcast | pre-recorded video

[Figure 1. Video Broadcast through a Transcode Tree (n = 2, k = 6): layers of peers receiving qualities from 800 kbps down to 200 kbps]

[Figure 2. Layer Tree Construction: internal layers and leaf layers labeled with their layer qualities]

MTcast constructs an overlay multicast tree where s is the root node and the user peers in U are intermediate (internal) or leaf nodes. Hereafter, this multicast tree is called the transcode tree.

2.3 Structure of Transcode Tree

Internal nodes in the transcode tree transmit a video stream to their children nodes. In MTcast, each internal node basically has n children nodes. The value of n is decided based on the available resources of peers, as explained in Sect. 2.4. In order to reduce the start-up delay for video playback and the number of transcodings applied to the video, the transcode tree is constructed as a modified complete n-ary tree where the degree of the root node is k instead of n (k is a constant explained later). The transcode tree is constructed so that for each node ui and each of its children nodes uj, ui.q >= uj.q holds. That is, from the root node to each leaf node, the nodes are ordered in decreasing order of quality requirements. In order to tolerate node leaves/failures, every k nodes in U are bunched up into one group. We call each group a layer, where k is a predetermined constant. In MTcast, peers in the same layer receive video with the same quality. This quality is called the layer quality. A representative peer is selected for each layer. The parent-child relationship among all layers of the transcode tree is called the layer tree. An example of the transcode tree with n = 2 and k = 6 is shown in Fig. 1, where small circles and big ovals represent peers and layers, respectively.

2.4 Construction of Transcode Tree

In MTcast, the transcode tree is calculated in a centralized way by a peer uC in U. We assume that the server s decides uC or behaves as uC. The tree construction algorithm consists of the following three steps.

Step 1: For each peer u in U, the algorithm checks whether u can transcode one or more video streams in real time by inequality (1), and can transmit n + 1 video streams at the same time by inequality (2).

u.ntrans(u.q) >= 1    (1)
u.nlink(u.q) >= n + 1    (2)

If the above two inequalities hold for u, then u is put into UI, the set of internal peers; otherwise it is put into UL, the set of leaf peers. s is always put into UI. If |UI| < (1/n)|U|, the whole network resource is not sufficient to satisfy the quality requirements of all peers. In such a case, we let |UL| - ((n-1)/n)|U| peers in UL with larger upstream bandwidths reduce their quality requirements so that inequalities (1) and (2) hold. Then, those peers are moved to UI. By the above procedure, |UI| >= (1/n)|U| always holds.

Step 2: The peers of UI are sorted in decreasing order of their quality requirements, and every bunch of k peers is packed into an internal layer, where k is the constant defined in Sect. 2.3. Here, we select the first and second peers of each layer as the responsible peer and the sub-responsible peer of the layer, respectively. The average value of the quality requirements in each layer is called the layer quality, and all peers in the layer adjust their quality requirements to this value. For the set of leaf nodes UL, the elements are similarly packed into leaf layers.

Step 3: The algorithm sorts the internal layers in decreasing order of the layer quality, and constructs the complete n-ary tree called the layer tree, consisting of those internal layers, where the layer quality of each layer does not exceed that of its parent layer. In MTcast, we use depth-first search order to assign internal layers to the layer tree, as shown in Fig. 2. Next, the algorithm attaches each leaf layer L to the internal layer whose layer quality is closest to that of L. If the layer quality of L exceeds that of L's parent layer, the layer quality of L is adjusted to that of L's parent. Finally, the transcode tree is obtained by assigning internal nodes and leaf nodes to the corresponding internal layers and leaf layers in decreasing order of their required quality, respectively, and assigning parent-child relationships between peers according to the layer tree.
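Steps 1 and 2 above can be sketched as follows. This is a simplified illustration under two assumptions that the paper does not make this concretely: inequality (1) is taken to hold for every peer, and u.nlink(q) is modeled as floor(upper_bw / q).

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of Steps 1-2 of Sect. 2.4. Assumptions: every peer can
// transcode at least one stream (inequality (1)), and nlink(q) is modeled
// as floor(upper_bw / q).
class TreeConstructionSketch {

    // Step 1: peers able to transmit n+1 streams (inequality (2)) become
    // internal peers (UI); the remaining peers would form UL.
    static List<Integer> internalPeers(double[] upperBw, double[] q, int n) {
        List<Integer> ui = new ArrayList<>();
        for (int i = 0; i < q.length; i++) {
            if ((int) (upperBw[i] / q[i]) >= n + 1) {
                ui.add(i);
            }
        }
        return ui;
    }

    // Step 2: sort UI in decreasing order of quality requirement and pack
    // every bunch of k peers into a layer; the layer quality is the average
    // requirement of its members, which all members adopt.
    static List<double[]> packLayers(List<Integer> ui, double[] q, int k) {
        List<Integer> sorted = new ArrayList<>(ui);
        sorted.sort((a, b) -> Double.compare(q[b], q[a]));
        List<double[]> layers = new ArrayList<>(); // each entry: {quality, size}
        for (int start = 0; start < sorted.size(); start += k) {
            int end = Math.min(start + k, sorted.size());
            double sum = 0;
            for (int i = start; i < end; i++) {
                sum += q[sorted.get(i)];
            }
            layers.add(new double[] { sum / (end - start), end - start });
        }
        return layers;
    }
}
```

For instance, four peers requiring 900, 700, 500, and 300 kbps, each with 3,000 kbps of upstream bandwidth and n = 2, all become internal peers; with k = 2 they are packed into two layers with layer qualities of 800 and 400 kbps.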
How to decide transcode tree degree n and layer size k

In MTcast, the transcode tree is constructed as a modified complete n-ary tree. So, as the value of n becomes larger, the tree height (i.e., the number of transcodings) decreases. Since the required upstream bandwidth of each node increases in proportion to the value of n, the value of n must be carefully decided considering the upstream bandwidth limitation of each node. In order to avoid reducing the quality requirements in Step 1, we should decide the value of n so that the number of peers satisfying inequality (2) is no less than |U|/n.

The value of k affects the tolerance to leaves/failures of peers. If f peers may leave a layer at the same time before the transcode tree is reconstructed, the remaining k - f nodes in the layer must transmit video streams to the n * k children nodes. So, the following inequalities must hold in order to recover from f simultaneous failures in each layer. Thus, the appropriate value of k can be decided from the values of n and f.

(k - f) * u.nlink(q) >= n * k    (3)
(k - f) * u.ntrans(q) >= n * k / u.nlink(q)    (4)

2.5 MTcast Protocols

(1) Start-up Behavior: Let t denote the time of video delivery. Each user who wants to receive the video stream sends a video delivery request to the video server s before time t - delta. At time t - delta, s calculates the transcode tree T with the algorithm explained in Sect. 2.4. Here, delta is the time to calculate the transcode tree and to distribute the necessary information to all nodes. Next, s distributes the following information I or I' to all peers along T.

- I: information distributed to the responsible peer uR of a layer L — all information of T, including the addresses of peers and their parent-child relationships, the structure of the layer tree and the membership of peers, the addresses of the responsible/sub-responsible peers, the layer quality of each layer, and the next reconstruction time tr.

- I': information distributed to each peer p of layer L other than L's responsible peer — the addresses of L's responsible peer, of p's children peers, and of the responsible and sub-responsible peers of L's parent layer, and the next reconstruction time tr.

(2) Protocols for peer joining and leaving: As explained before, each peer in an internal layer has extra upstream bandwidth for forwarding one more video stream. A user peer u_new who has requested video delivery after time t can use this extra bandwidth to receive a video stream instantly. Here, the fan-out of the forwarding peer u_f which sends a stream to u_new is tentatively allowed to be n + 1.

If one or more peers in a layer fail or suddenly leave the transcode tree, all of their descendant nodes, called orphan peers, will not be able to receive video streams. In MTcast, each orphan peer can find an alternative parent peer by asking the responsible peer of the parent layer. The responsible peer that received such a request asks one of the peers in the layer to forward the video stream to the orphan node using its extra upstream bandwidth, similarly to the case of processing new delivery requests. An example is shown in Fig. 3.

[Figure 3. Recovery from Node Failure: (a) before failure, (b) after recovery]

(3) Reconstruction of Transcode Tree: The transcode tree is periodically reconstructed to adjust the tentatively increased degree (n + 1) back to the normal one (n). Peer uC reconstructs the transcode tree in the following steps. First, uC collects the quality requirements effective after time tr from all peers along the layer tree. Then, it calculates the new transcode tree with the algorithm in Sect. 2.4, and distributes the information of the transcode tree to all peers. At time tr, all nodes stop receiving streams from their current parent nodes, and the nodes in the root layer of the new transcode tree start to deliver video streams. Nodes in internal layers also forward the video streams after receiving them. The video stream transmitted along the new transcode tree arrives after a certain time lag due to transcoding and link latency. During the time lag, each node plays back video from its buffer. For the next reconstruction of the transcode tree, the buffer of each node must be filled with video data covering the above time lag. This is done by transmitting the video stream slightly faster than its playback speed.

3 Implementation of MTcast System

In the original MTcast in [9], the parent-child relationship of peers in the transcode tree is decided without considering the properties of overlay links. However, in the Internet, an overlay link could be a long path with many hops in the physical network. So, first, we have designed and implemented an effective decision method for the parent-child relationship between peers. Then, aiming to investigate the usefulness of MTcast in the real Internet environment, we have implemented the MTcast system. We address the details of the implementation in the following subsections.

3.1 Decision of parent-child relationship between peers

As we already explained, the algorithm in Sect. 2.4 does not make a distinction between overlay links over different ASs (autonomous systems) and those within an AS. In this subsection, we propose a technique to save the bandwidths of inter-AS links without spoiling the advantages of MTcast. As our basic policy, we consider the physical network topology to decide the parent-child relationship of peers in the transcode tree.

Let C and P be the sets of peers which belong to a layer and its parent layer, respectively. For each pair of
peers between the child layer and the parent layer, the physical path and the available bandwidth can be obtained with tools such as traceroute and Abing, respectively. For peers c in C and p in P, let bw(c, p) and L(c, p) denote the available bandwidth measured with such tools (called the measured available bandwidth, hereafter) and the set of physical links between c and p except for the links connected to nodes c and p, respectively.

Next, we estimate the available bandwidth of each overlay link (called the estimated available bandwidth, hereafter) by considering that some links are shared among multiple overlay links. For each link l in L(c, p), the available bandwidth ml and the sharing degree dl are defined as follows.

ml = Min{ bw(c, p) | l in L(c, p), c in C, p in P }
dl = | { c | l in L(c, p), c in C, p in P } |

Based on the available bandwidth ml and the sharing degree dl of each physical link, we decide the parent node of each child node as follows. For each p in P and c in C, we investigate whether c can receive the video stream with the required quality. If only one peer (say, p1) in P can transmit a video stream with the required quality to c, then p1 is assigned as the parent node of c. If several peers in P can transmit the video stream to c, the peer attaining Max_{p in P} Min_{l in L(c,p)} ml/dl, which represents the peer with the maximum available bandwidth per share, is assigned as c's parent node.

We designed the prototype system with emphasis on the following two points: (i) ease of implementation and (ii) reusability of the finished software; when we change the proposed method, the prototype system should be easily modified to comply with the new specification, and a large part of the software should be reusable in other similar projects. With these two requirements satisfied, we devised the prototype system not to sacrifice performance. By adopting the modular design described below, we made the process which runs on each node a generic program whose functionality can be changed by changing the code of the central server process.

3.2 Modular design

In order to flexibly change the formation of the process on each node and to facilitate performance evaluation under various configurations, we designed the process on each node to be a set of modules such as buffers and transcoders. The process on each node makes instances of these modules and connects them according to commands from the central server process. The central server process can also issue commands to particular modules on each node. We built these mechanisms using the Java language and RMI (Remote Method Invocation). The modules include the transcoder, video player, network data receiver, transmitter, buffer, user interface, configuration file reader, etc., and each of these modules corresponds to one class. Each instance of a module corresponds to one thread, so that the modules can be programmed more easily than with event-driven style code.

Each instance made by commands from the central server process is registered in a hash table on the node with a unique key. Methods of these instances can be invoked later by referring to this hash table. The prototype system should take some action according to status changes in modules, like a lost connection or the amount of buffered data exceeding a threshold. In our design, the central process is informed of these changes through a queue for status change messages arranged in the central process. In case of a status change, the corresponding module puts a message into the queue, and the central process takes an appropriate action according to the incoming message.

3.3 Devices to facilitate video stream handling

Although a mechanism similar to the InputStream and OutputStream classes in the standard Java library can be used to exchange video streams between modules, it would be troublesome to realize a procedure like restarting a transcoder with different parameters while processing a continuous stream. In the prototype system design, we made an original class whereby each end of frame data is explicitly declared by the transmitter; the receiver reads data until the end of the frame data as usual, but after that the read method returns -2, as when it reaches EOF, and then the receiver continues reading the data of the next frame. With this specification, each end of frame data can be easily processed without destroying and remaking module instances.

Stream processing can be realized using JMF (Java Media Framework), but it is known that there will be a large processing delay due to the buffers placed in each processing module. In the prototype system, we tried to minimize the processing delay by reducing the buffering in each module. In the system, while each module basically corresponds to a thread, we made the write method block until the corresponding reader thread reads all of the data written by that call. By this design, the processing delay is reduced, since when a module invokes the write method, control is immediately passed to the reader thread, and thus there is no processing delay due to the thread switching interval of the operating system.

3.4 Devices to facilitate evaluation on many PlanetLab nodes

In order to facilitate evaluation using several tens of PlanetLab nodes, we devised the following scheme. The conditions of PlanetLab nodes change from moment to moment, and good nodes frequently become unavailable after a short period of time. In order to cope with this situation, we made a set of scripts which check the currently installed files on each node and update them if necessary. The scripts consist of ones run on the operator's host and ones run on the PlanetLab nodes. The former scripts copy the necessary scripts and files onto the PlanetLab nodes using the scp command, and execute them. This operation is executed in parallel. The latter scripts check the versions of the installed files and, if necessary, download and install them using the wget command. Using these scripts, we confirmed that it takes less than 30 minutes to install the Java runtime environment on 30 PlanetLab nodes. We also confirmed that installing the Java class files of the prototype system on these nodes takes about 1 minute.

4 Performance Evaluation

In order to show the practicality of MTcast, we need to validate (1) the overhead of real-time transcoding by user peers, (2) the network and processing overhead for constructing a transcode tree, and (3) the recovery time after peer leaving/failure.
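The parent selection rule of Sect. 3.1, which picks the candidate parent maximizing the bottleneck bandwidth-per-share Min over l in L(c,p) of ml/dl, can be sketched as follows. The map-based representation of paths and link measurements is our own illustrative assumption; in the prototype, these values come from traceroute and Abing measurements performed beforehand.

```java
import java.util.List;
import java.util.Map;

// Sketch of the parent selection rule of Sect. 3.1: among the candidate
// parents able to serve the child's quality, pick the one maximizing the
// bottleneck "available bandwidth per share" min over links of ml / dl.
// Data structures are illustrative, not the authors' implementation.
class ParentSelectionSketch {

    // paths: candidate parent -> physical links of its path to the child c
    // ml:    link -> estimated available bandwidth of the link
    // dl:    link -> number of overlay links sharing the link
    static String selectParent(Map<String, List<String>> paths,
                               Map<String, Double> ml,
                               Map<String, Integer> dl) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, List<String>> entry : paths.entrySet()) {
            double bottleneck = Double.POSITIVE_INFINITY;
            for (String link : entry.getValue()) {
                // bandwidth per share of this physical link
                bottleneck = Math.min(bottleneck, ml.get(link) / dl.get(link));
            }
            if (bottleneck > bestScore) {
                bestScore = bottleneck;
                best = entry.getKey();
            }
        }
        return best;
    }
}
```

For example, a parent whose path has one unshared 800 kbps link beats a parent whose path contains a 1,000 kbps link shared by two overlay links (800 vs. 500 kbps per share), even though the latter's raw bandwidth is higher.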
For the usefulness of MTcast, we also need to validate (4) user satisfaction (i.e., whether each user can receive and play back video at his/her requested quality), (5) the efficiency of the transcode tree (in terms of overlay link lengths), and (6) the start-up delay (i.e., the time delay until video is displayed after its request).

In [9], we already showed the validity of (1), (2), and (4) above, and confirmed that desktop PCs and laptop PCs have enough power for real-time transcoding of a video stream (for (1)), that an ordinary PC can calculate the transcode tree with 100,000 peers in 1.5 seconds and the tree size is within 300 Kbytes (for (2)), and that MTcast achieves higher user satisfaction than layered multicast using 10 layers through computer simulation (for (4)).

In Sect. 4.1, we present the detailed experimental results of applying our prototype system to show the validity of (3), (5), and (6) above.

4.1 Evaluation on PlanetLab nodes

In order to evaluate the performance of the proposed method in a real environment, we operated our prototype system on PlanetLab nodes. In this section, we describe the results of measurements of the time required to join, the delay introduced by successive forwarding and transcoding, the behavior in case of abrupt departure of a node, and the effectiveness of the bandwidth saving method for backbone links.

Configuration

We used video with a pixel size of 180 x 120 and a frame rate of 15 fps, without audio. We used motion JPEG as the video CODEC, and TCP to transmit video data between nodes. In order to evaluate scalability, we constructed the transcode tree as a cascade of layers with two nodes, instead of the modified n-ary tree. In the experiments, we used several tens of PlanetLab nodes, including the nodes shown in Table 2. We also used katsuo.naist.jp and wakame.naist.jp, which do not belong to PlanetLab. We executed the originating video server on katsuo.naist.jp, and one user node process on each of the other nodes. Due to the constantly high processing load on PlanetLab nodes, it was difficult to allocate the processing power required for transcoding, so we did not perform transcoding on each node. Instead, we made each node just buffer the data for one frame and transmit it unprocessed. We believe that when our method is used in practice, the processing power for transcoding will surely be available.

Table 2. Nodes used in experiments

katsuo.naist.jp, wakame.naist.jp, planet0.jaist.ac.jp, planetlab-01.naist.jp, planetlab-02.naist.jp, planetlab-03.naist.jp, planetlab-04.naist.jp, planetlab1.tmit.bme.hu, planetlab2.tmit.bme.hu, planetlab02.cnds.unibe.ch, planetlab01.cnds.unibe.ch, pl1-higashi.ics.es.osaka-u.ac.jp, planetlab3.netmedia.gist.ac.kr, planetlab1.netmedia.gist.ac.kr, planetlab2.iii.u-tokyo.ac.jp, planetlab1.iii.u-tokyo.ac.jp, planetlab5.ie.cuhk.edu.hk, planetlab4.ie.cuhk.edu.hk, planetlab-02.ece.uprm.edu, planetlab-01.ece.uprm.edu

Time required for start-up

Table 3 shows the times required for starting up. The columns "establishing connection" and "receiving data" indicate the times from the start of the system until the last node establishes a connection and until the last node receives the first byte of data, respectively.

Table 3. Time required for starting up

# of nodes | establishing connection | receiving data
4 | 10,871 ms | 19,040 ms
8 | 11,681 ms | 16,993 ms
12 | 15,307 ms | 15,307 ms
16 | 11,317 ms | 18,781 ms
20 | 12,144 ms | 17,562 ms

Please note that these times include the time for the Java JIT compiler to compile the byte code. Since all nodes establish connections in parallel, the obtained time is the maximum time required for a node to establish a connection. Since leaf nodes have to receive data via their ancestor nodes, the time to receive data would be expected to be proportional to the height of the tree. In practice, however, no increase of this time is observed as the height of the tree grows, because the time required to establish connections is dominant. In any case, these times are less than 20 seconds, and thus these values are practical.

Delay introduced by successive forwarding and transcoding

Table 4 shows the time from when the first node receives the first byte of data until the last node receives the first byte of data. The result is similar to that of the start-up delay measurement. The times are less than 21 seconds, and thus these values are practical.

Table 4. Delay introduced by forwarding

# of nodes | delay
14 | 7,002 ms
16 | 8,603 ms
20 | 20,573 ms
23 | 10,024 ms
27 | 9,435 ms

Behavior in case of abrupt node departure

Table 5 shows the time to reestablish a connection after a node departure. The time to reconnect and the time to restart receiving data are the times from when the departing node sends the departure request until its child node reestablishes a connection to a new parent node, and until the child node receives the first byte of data from the new parent node, respectively. We used the same cascading transcode tree topology as in the last measurement, and made the node indicated in the table leave the system. The measured times are less than two seconds, and users see the video uninterruptedly during this period, since video stream data is buffered in each node. Thus, the results are practical.
Table 5. Behavior in case of node departure

departure node | new parent node | time to reconnect | time to restart
planetlab1.netmedia.gist.ac.kr | thu1.6planetlab.edu.cn | 1,471 ms | 1,805 ms
planetlab4.ie.cuhk.edu.hk | planetlab1.netmedia.gist.ac.kr | 288 ms | 411 ms
planetlab-01.ece.uprm.edu | planetlab5.ie.cuhk.edu.hk | 1,114 ms | 1,657 ms
planetlab1.tmit.bme.hu | planetlab-02.ece.uprm.edu | 1,383 ms | 1,626 ms
planetlab01.cnds.unibe.ch | planetlab1.tmit.bme.hu | 1,541 ms | 1,841 ms
planetlab2.iii.u-tokyo.ac.jp | planet0.jaist.ac.jp | 61 ms | 1,004 ms

Effectiveness of bandwidth saving method for backbone links

In order to evaluate the effectiveness of the bandwidth saving method for backbone links described in Sect. 3.1, we compared the total number of hops in the transcode tree when the system chooses the parent node of each node at random, when it uses the method in Sect. 3.1, and when it uses a method that chooses the nearest-hop parent. In this experiment, we set the number of nodes per layer to 4, and used the cascaded transcode tree. We made the system obtain the route and the available bandwidth between nodes using traceroute and Abing [11], respectively. Some routers between nodes did not respond to ICMP Echo messages, and these routers are treated as different routers for each route between nodes. The routes and available bandwidths are obtained before constructing the transcode tree; these measurements are performed in parallel for each route, and took about 30 seconds. Table 6 shows the total hop counts in the transcode tree, measured three times. Note that even when we use the method in Sect. 3.1, the resulting transcode tree differs each time, since the results of the available bandwidth measurements differ each time. The results indicate that using the method in Sect. 3.1 leads to a 16% smaller hop count, and choosing the nearest-hop parent leads to a 41% smaller hop count.

Table 6. Effectiveness of backbone bandwidth saving method

method | # of hops
random | 343, 361, 335
method in 3.1 | 314, 280, 277
minimum hop count | 204, 204, 204

5 Conclusion

In this paper, we have implemented the algorithm and protocols of MTcast proposed in [9], and showed the practicality and performance of MTcast through experiments made on PlanetLab. In future work, we will extend the algorithm of MTcast to treat multi-dimensional quality requests, including picture size, frame rate, and bit rate, and to transcode each quality factor independently of the others.

References

[1] G. Conklin, G. Greenbaum, K. Lillevold, and A. Lippman: "Video Coding for Streaming Media Delivery on the Internet," IEEE Trans. on Circuits and Systems for Video Technology, Vol. 11, No. 3, 2001.

[2] S. Jacobs and A. Eleftheriadis: "Streaming Video using Dynamic Rate Shaping and TCP Flow Control," Visual Communication and Image Representation Journal, 1998 (invited paper).

[3] J. Liu, B. Li, and Y.-Q. Zhang: "An End-to-End Adaptation Protocol for Layered Video Multicast Using Optimal Rate Allocation," IEEE Trans. on Multimedia, Vol. 6, No. 1, 2004.

[4] B. Vickers, C. Albuquerque, and T. Suda: "Source-Adaptive Multilayered Multicast Algorithms for Real-Time Video Distribution," IEEE/ACM Trans. on Networking, Vol. 8, No. 6, pp. 720-733, 2000.

[5] H. Radha, M. Shaar, and Y. Chen: "The MPEG-4 Fine-Grained-Scalable video coding method for multimedia streaming over IP," IEEE Trans. on Multimedia, Vol. 3, No. 1, 2001.

[6] S. Banerjee, C. Kommareddy, K. Kar, B. Bhattacharjee, and S. Khuller: "Construction of an Efficient Overlay Multicast Infrastructure for Real-time Applications," Proc. of IEEE Infocom 2003, pp. 1521-1531, 2003.

[7] V. Padmanabhan, H. Wang, P. Chow, and K. Sripanidkulchai: "Distributing streaming media content using cooperative networking," Proc. of the 12th Int'l Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2002), pp. 177-186, 2002.

[8] D. Liu, E. Setton, B. Shen, and S. Chen: "PAT: Peer-Assisted Transcoding for Overlay Streaming to Heterogeneous Devices," Proc. of the 17th Int'l Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2007), pp. 51-53, 2007.

[9] T. Sun, M. Tamai, K. Yasumoto, N. Shibata, M. Ito, and M. Mori: "MTcast: Robust and Efficient P2P-Based Video Delivery for Heterogeneous Users," Proc. of the 9th Int'l Conf. on Principles of Distributed Systems (OPODIS 2005), pp. 176-190, 2005.

[10] PlanetLab: "An open platform for developing, deploying, and accessing planetary-scale services," https://www.planet-lab.org/.

[11] J. Navratil and R. L. Cottrell: "ABwE: A Practical Approach to Available Bandwidth Estimation," http://www-iepm.slac.stanford.edu/tools/abing/