Feb. 22, 2005 EuroIMSA 2005


  • Thank you, Mr. Chairman. I am going to present our research, titled "Low-delay and continuous P2P video streaming on structured networks."
  • This presentation is organized as follows. First, I introduce the research background and describe our objective. Next, I explain our proposal, which consists of three components: the scheduling algorithm, the tree construction mechanism, and the fault recovery mechanism. After that, I show some results of simulation experiments. Finally, I conclude the presentation.
  • Here I introduce the background of our research. Video streaming is a promising application that has become widely deployed. In the client-server model, shown in the left figure, the server has to distribute video data to all clients, which places a heavy load on the server. P2P multicast technology, however, can reduce the server load by having peers relay video data to other peers, as shown in the right figure. To realize P2P multicast, multicast trees must be constructed to determine how video data are distributed.
  • How can we construct "good" distribution trees? Many other papers measure characteristics of the underlying physical network, for example delay or bandwidth. However, such measurement takes a long time and places a heavy load on the network. In enterprise or university networks, measurements are not required, because these networks have well-organized hierarchical structures consisting of branches and divisions or departments. By assuming such structured networks, distribution trees can be built easily and quickly.
  • Therefore, we propose a scalable video streaming distribution scheme for networks constructed hierarchically based on organizational structure, for example in enterprises and universities. In this proposal we aim to reduce the load on servers and networks, shorten the initial waiting time until the video starts playing, and reduce the freeze time caused by faults.
  • First, I give an overview of our proposal. This figure shows an example of an enterprise network. There are three servers at the central office: the scheduling server, the tree server, and the video server. The scheduling server manages the schedules of peers and servers and determines when distribution trees have to be constructed. The tree server manages distribution trees and determines how to construct them. The video server is the root of the distribution trees and distributes video streams to the peers. An enterprise has several branches, whose networks we call "branches", and each branch has several divisions, whose networks we call "subnets". For every branch, one peer is selected as the local tree server, and inter-branch trees are constructed among local tree servers. In the same way, for every subnet, one peer is selected as the subnet tree server, and inter-subnet trees are constructed among subnet tree servers. The remaining peers are called normal peers (NPs), and inter-NP trees are constructed among them.
  • Our proposal consists of three features: the scheduling algorithm, which reduces the initial waiting time; the tree construction mechanism, which constructs distribution trees quickly; and the fault recovery mechanism, which keeps the video streams from freezing. I explain these three ideas in this order.
  • For scheduling we use the existing pyramid broadcasting technique. Please take a look at the top figure. A video is divided into segments, and the durations of the segments follow a geometric progression so that playback does not freeze. Next, please look at the lower figure. Each segment is broadcast repeatedly on a dedicated channel at twice the play rate, and each segment duration is called a slot. At the beginning, a peer that wants to watch a video stream sends a "schedule request" to the scheduling server. The scheduling server assigns slots to the peer so that it receives the segments one by one from the beginning of the stream. The peer then receives and plays the segments, one after another, in the order the scheduling server indicates.
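The segmentation above can be sketched in a few lines. The concrete numbers (a 186-second video, 5 segments, a factor-2 progression, and broadcast at twice the play rate) are taken from the simulation setup on slide 15; the function itself is an illustrative reconstruction, not the authors' implementation.

```python
def segment_lengths(video_len, n_segments, ratio=2):
    """Split a video of `video_len` seconds into `n_segments` whose
    durations follow a geometric progression with the given ratio."""
    first = video_len * (ratio - 1) / (ratio ** n_segments - 1)
    return [first * ratio ** k for k in range(n_segments)]

lengths = segment_lengths(186, 5)   # -> [6.0, 12.0, 24.0, 48.0, 96.0]

# Each segment is broadcast at twice the play rate, so downloading
# segment k takes lengths[k] / 2 seconds.  Playback stays continuous
# because segment k+1 finishes downloading no later than segment k
# finishes playing:
for k in range(len(lengths) - 1):
    assert lengths[k + 1] / 2 <= lengths[k]
```

With a ratio of 2, the download time of the next segment exactly equals the play time of the current one, which is why the geometric progression keeps the stream from freezing.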
  • As the second idea, I explain the tree construction mechanism. Distribution trees are constructed segment by segment and slot by slot. Suppose the blue peers receive the first segment in this blue slot, and the red peers in this red slot. Under the scheduling algorithm, both the blue and red peers receive the second segment in this yellow slot. This means they are all included in the tree of the segment in the yellow slot, and the tree is constructed like this. Next, I explain how these trees are constructed, taking one tree as an example.
  • We construct three types of trees, depending on the level of the hierarchy. First, I show how the inter-branch trees are constructed. After communicating with the scheduling server, each peer sends a "participation request" to the tree server. In each branch, the first peer to send the request is assigned as the local tree server (LTS). The tree server informs the LTS of the IP address of the video server, and the LTS establishes a connection to the video server.
  • Suppose that an LTS in branch C also tries to connect to the video server. This would concentrate load on the video server, so the video server limits the number of its children; this limit is called the "fanout number". If the fanout number is two, the LTS in branch C cannot connect to the video server, because the video server already has two children. In this case, the video server informs the LTS of the IP address of one of its children, and the LTS connects to that address. Peers have the same fanout limitation: if LTS C cannot connect to LTS B because of the limit, LTS C tries to connect to a child of LTS B. In other words, peers recursively step down the tree and always eventually succeed in joining it. The tree server does not manage this process, which helps reduce the load on the server. Peers record the addresses they were informed of during this process as an "ancestor list"; for example, LTS C holds the IP addresses of the video server and LTS B. This information is used in the fault recovery mechanism, which I will explain later. Next, I explain how the inter-subnet trees are constructed.
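The "step down the tree" join procedure can be sketched as follows. This is a minimal sketch under stated assumptions: the `Peer` class, a fanout of 2 (as on slide 11), and direct method calls are illustrative stand-ins for the network messages the real system exchanges.

```python
FANOUT = 2  # maximum number of children per node (slide 11 uses 2)

class Peer:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.ancestors = []   # the "ancestor list" used by fault recovery

    def join(self, node):
        """Try to attach under `node`; if it is full, recurse into one of
        its children.  Every node visited is recorded as an ancestor."""
        self.ancestors.append(node)
        if len(node.children) < FANOUT:
            node.children.append(self)
            return node
        # node is full: it informs us of one of its children and we retry
        return self.join(node.children[0])

root = Peer("video-server")
lts = [Peer(f"LTS-{c}") for c in "ABC"]
for p in lts:
    p.join(root)
# LTS-A and LTS-B attach to the video server; LTS-C finds it full and
# steps down, attaching under LTS-A with ancestors [video-server, LTS-A].
```

Because a full node always forwards the request to a child, the recursion terminates at the first node with spare fanout, so every peer is guaranteed to join, and the tree server never participates in this descent.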
  • Inter-subnet trees are constructed in almost the same manner as inter-branch trees. In each subnet, the first peer to send a participation request to the tree server is assigned as the subnet tree server (STS). The STS is informed of the IP address not of the video server but of the LTS of its branch, and establishes a connection to the LTS. The inter-subnet tree is then constructed in the same manner, respecting the fanout number. Finally, I explain how the inter-NP trees are constructed.
  • The way inter-NP trees are constructed differs between the first and later segments. For the first segment, a peer that sends a participation request is assigned as a normal peer (NP), is informed of the IP address of the STS of its subnet, and connects to it. For later segments, a peer does not access the tree server; instead it directly accesses the STS of the previous segment and establishes a connection. The inter-NP tree is likewise constructed respecting the fanout number.
  • As the final item, I discuss the fault recovery mechanism. Faults have many causes and occur on many occasions; for example, a user stops playing the video, or network components such as routers and links fail. This mechanism works in any of these situations. We propose two types of recovery: recovery within a subtree and recovery across different trees. Due to time limitations, I explain only recovery within a subtree. When a peer leaves the tree, its two children try to recover from the fault. These peers hold information about their ancestors, so one of them can connect to its grandparent and finish recovering. The other peer also tries to connect to the grandparent, but the fanout limit prevents that. This peer is then informed of the grandparent's children, connects to one of them, and recovery is complete. This mechanism works for all types of subtrees: inter-branch trees, inter-subnet trees, and inter-NP trees.
  • Our evaluation criteria are the load on servers and peers, the initial waiting time, and the recovery and freeze times. In our scenario, the network has five branches, each with five subnets. The peer arrival rate is set to 30 arrivals per second, which means that the number of peers participating in the video distribution is 2,790 (two thousand seven hundred and ninety) on average. Every peer leaves the service with a probability of 0.004 per second, which means that only 47% of peers receive the whole video. I now introduce some of the results of our simulation experiments.
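The quoted figures can be checked with a few lines of arithmetic, under the assumption that departures are independent per second and using the 186-second video duration from slide 15; the Little's-law reading of the 2,790 figure is our own consistency check, not a statement from the authors.

```python
# Probability of surviving the whole video when a peer leaves with
# probability 0.004 in each second, independently:
p_leave_per_sec = 0.004
video_len = 186  # seconds (slide 15)

p_whole_video = (1 - p_leave_per_sec) ** video_len
print(round(p_whole_video * 100))   # -> 47 (percent), matching the slide

# The average population of 2,790 peers is consistent with Little's law,
# N = arrival_rate * mean_residence_time, for a mean residence time of
# 2790 / 30 = 93 seconds, i.e. half the video length:
arrival_rate = 30  # peers per second
print(2790 / arrival_rate)          # -> 93.0
```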
  • This figure shows the distribution of the initial waiting time experienced by peers. The initial waiting time runs from when a peer sends a schedule request to the scheduling server until it starts playing the video. We can see that the average waiting time is 2.94 seconds and the maximum waiting time is less than six seconds.
  • This figure depicts the distribution of the freeze time, that is, the total time for which video play-out is interrupted. We can see that only 0.5% of peers experience a freeze, and the maximum freeze time is 0.98 seconds, short enough for users to tolerate.
  • Finally, I conclude the presentation. In this talk, we proposed a hybrid video streaming scheme. Simulation verified that our scheme provides users with low-delay and continuous video streaming services. However, we also found that the load on the servers is still rather high, so we need an improved mechanism to distribute the load efficiently. In addition, redundant flows can exist on inter-branch networks; a new algorithm to build bandwidth-efficient, low-cost inter-branch distribution trees is a topic of future work.
  • This completes the presentation. Thank you for your kind attention.

    1. 1. A Hybrid Video Streaming Scheme on Hierarchical P2P Networks * Shinsuke Suetsugu Naoki Wakamiya, Masayuki Murata Osaka University, Japan Koichi Konishi, Kunihiro Taniguchi NEC Corporation, Japan
    2. 2. Overview <ul><li>Research background </li></ul><ul><li>Objective </li></ul><ul><li>Proposed scheme </li></ul><ul><ul><li>Scheduling algorithm </li></ul></ul><ul><ul><li>Tree construction mechanism </li></ul></ul><ul><ul><li>Fault recovery mechanism </li></ul></ul><ul><li>Simulation experiments </li></ul><ul><li>Conclusion & future work </li></ul>
    3. 3. Research Background (1/2) <ul><li>Increased popularity of video streaming </li></ul><ul><ul><li>streaming: to decode while downloading </li></ul></ul><ul><ul><li>distribution of movies or news </li></ul></ul><ul><li>Development of P2P technology </li></ul><ul><ul><li>distribute video streaming with P2P multicast </li></ul></ul><ul><ul><li>reduce load of servers </li></ul></ul>Server Client Peer Server P2P multicast construct distribution tree Client-server model
    4. 4. Research Background (2/2) <ul><li>How to construct “good” distribution trees? </li></ul><ul><ul><li>one way is to measure characteristics of physical networks, such as delay or bandwidth </li></ul></ul><ul><ul><ul><li>requires long time </li></ul></ul></ul><ul><ul><ul><li>causes heavy load </li></ul></ul></ul><ul><ul><li>enterprises or universities: well-organized networks are constructed </li></ul></ul><ul><ul><ul><li>branch networks or division networks </li></ul></ul></ul><ul><ul><ul><li>no need to measure characteristics </li></ul></ul></ul><ul><ul><ul><li>construct distribution trees easily and quickly </li></ul></ul></ul>
    5. 5. Objective <ul><li>We propose a hybrid video streaming scheme for… </li></ul><ul><ul><li>networks hierarchically constructed based on organization structure </li></ul></ul><ul><ul><li>networks with thousands or tens of thousands of users </li></ul></ul><ul><li>Objective of our proposal </li></ul><ul><ul><li>reduce load of servers </li></ul></ul><ul><ul><li>reduce load of networks </li></ul></ul><ul><ul><ul><li>avoid connecting between long-distance peers </li></ul></ul></ul><ul><ul><ul><li>avoid using a link repeatedly for a flow </li></ul></ul></ul><ul><ul><li>reduce initial waiting time until video starts playing </li></ul></ul><ul><ul><li>reduce freeze time caused by faults </li></ul></ul>
    6. 6. Overview of Proposed Scheme Tree Server (GTS) Video Server (ORG) Scheduling Server (SS) Local Tree Server (LTS) Subnet Tree Server (STS) Normal Peer ( NP ) branch A branch C branch B subnet a subnet c subnet b Central office branch branch branch division division division
    7. 7. Outline of Proposed Scheme <ul><li>1. Scheduling algorithm </li></ul><ul><ul><li>use “pyramid broadcasting” ->   reduce initial waiting time until video starts playing </li></ul></ul><ul><li>2. Tree construction mechanism </li></ul><ul><ul><li>construct P2P multicast trees ->   reduce load of servers </li></ul></ul><ul><ul><li>make networks hierarchical based on physical structure ->   reduce load of networks </li></ul></ul><ul><li>3. Fault recovery mechanism </li></ul><ul><ul><li>recover from faults rapidly with only local communication ->   reduce freeze time caused by fault </li></ul></ul>
    8. 8. 1. Scheduling Algorithm - pyramid broadcasting - [figure: video data divided into segments with length ratio 1 : 2 : 4; each segment broadcast continuously on its own channel at transmission rate 2b, twice the play rate b; labels: Scheduling Server (SS), Video Server (ORG), segment, slot, peer, playout]
    9. 9. 2. Tree Construction Mechanism trees are constructed segment-by-segment and slot-by-slot
    10. 10. 2. Tree Construction Mechanism - inter-branch tree - Tree Server ( GTS ) Video Server ( ORG ) inform IP address of Video Server participation request The first participation request in this branch branch A branch C branch B Central office LTS     LTS
    11. 11. 2. Tree Construction Mechanism - inter-branch tree - Tree Server ( GTS ) Video Server ( ORG ) branch A branch C branch B limitation for number of children fanout = 2 list of ancestors: video server -> LTS B recursively attempts to connect to children on failure inform IP address of one of its children Peers store informed addresses as a list of ancestors Central office
    12. 12. 2. Tree Construction Mechanism - inter-subnet tree - LTS Tree Server ( GTS ) Video Server ( ORG ) consider its fanout branch A subnet A subnet D subnet B subnet C inform IP address of LTS in this branch STS
    13. 13. 2. Tree Construction Mechanism - inter-NP (Normal Peer) tree - <ul><li>difference between 1st and later segments </li></ul>inform IP address of STS in this subnet 1st segment later segment access STS of previous segment directly without Tree Server Tree Server ( GTS ) consider its fanout subnet A NP STS
    14. 14. 3. Fault Recovery Mechanism - recovery within subtree - <ul><li>Fault: </li></ul><ul><ul><li>peer leaves network </li></ul></ul><ul><ul><ul><li>variety of reasons </li></ul></ul></ul><ul><ul><ul><li>different occasions </li></ul></ul></ul><ul><li>Recovery within subtree </li></ul><ul><ul><li>each peer has information about ancestors </li></ul></ul><ul><ul><li>mechanism can be applied in the same manner to: </li></ul></ul><ul><ul><ul><li>inter-branch trees </li></ul></ul></ul><ul><ul><ul><li>inter-subnet trees </li></ul></ul></ul><ul><ul><ul><li>inter-NP trees </li></ul></ul></ul>consider fanout limitation connection request Inform address connection request connection request recovery
    15. 15. Simulation Experiments <ul><li>Evaluation criteria </li></ul><ul><ul><li>load of servers and peers </li></ul></ul><ul><ul><li>initial waiting time </li></ul></ul><ul><ul><li>recovery and freeze time </li></ul></ul><ul><li>Scenario </li></ul><ul><ul><li>5 branches and 5 subnets in a branch </li></ul></ul><ul><ul><li>peer arrival process: uniform, average: 30 [arrivals/sec] </li></ul></ul><ul><ul><ul><li>->   average total number of peers: 2,790 </li></ul></ul></ul><ul><ul><li>transmission delay between server and peer: 200 [ms] </li></ul></ul><ul><ul><li>transmission delay between peers: 20 [ms] </li></ul></ul><ul><ul><li>fanout degree: 3 </li></ul></ul><ul><ul><li>fault probability: 0.004 </li></ul></ul><ul><ul><ul><li>(= 47% for correctly receiving the whole video) </li></ul></ul></ul><ul><ul><li>video duration: 186 [sec] </li></ul></ul><ul><ul><li>number (lengths) of segments: 5 ( 6, 12, 24, 48, 96 [sec] ) </li></ul></ul>
    16. 16. Initial Waiting Time average waiting time: 2.94 sec maximum waiting time: < 6 sec initial waiting time [sec] cumulative probability
    17. 17. Freeze Time video freezes for only 0.5 % of all thousands of peers maximum freeze time: < 1 sec freeze time [sec] cumulative probability
    18. 18. Conclusion & Future Work <ul><li>Proposal of video streaming distribution scheme </li></ul><ul><ul><ul><li>Scheduling algorithm </li></ul></ul></ul><ul><ul><ul><li>Tree construction mechanism </li></ul></ul></ul><ul><ul><ul><li>Fault recovery mechanism </li></ul></ul></ul><ul><ul><li>Reduction of initial waiting time and freeze time </li></ul></ul><ul><li>Extensions </li></ul><ul><ul><li>Further reduction of load on the tree server </li></ul></ul><ul><ul><li>Optimization of tree structures to eliminate long-distance redundant links </li></ul></ul>
    19. 19. <ul><li>Thank you. </li></ul>