Feb. 22, 2005 EuroIMSA 2005

Speaker notes
  • Thank you, Mr. Chairman. Now I am going to present our research, titled "Low-delay and continuous P2P video streaming on structured networks."
  • This presentation is organized as follows. First, I introduce the research background and describe our objective. Next, I explain our proposal, which consists of three parts: the scheduling algorithm, the tree construction mechanism, and the fault recovery mechanism. After that, I show some results of simulation experiments. Finally, I conclude the presentation.
  • Here I introduce the background of our research. Video streaming is a promising application that has become widely used, for example for distributing movies and news. In a client-server model, as shown in the left figure, the server has to distribute the video data to every client, which places a heavy load on the server. P2P multicast technology, however, can reduce the load on the server by having peers relay video data to other peers, as shown in the right figure. To realize P2P multicast, we need to construct multicast trees that determine how the video data are distributed.
  • How can we construct "good" distribution trees? Many other papers measure characteristics of the underlying physical network, for example delay or bandwidth. However, such measurements take a long time and place a heavy load on the network. In the case of enterprise or university networks, measurements are not required, because these networks have well-organized hierarchical structures consisting of branches and divisions or departments. By assuming such structured networks, distribution trees can be built easily and quickly.
  • Therefore, we propose a scalable video streaming scheme for networks that are hierarchically constructed based on the organizational structure, as in enterprises and universities. With this proposal we aim to reduce the load on servers and networks, shorten the initial waiting time until the video starts playing, and reduce the freeze time caused by faults.
  • First, I give an overview of our proposal. This figure shows an example of an enterprise network. There are three servers at the central office: the scheduling server, the tree server, and the video server. The scheduling server manages the schedules of peers and servers and determines when distribution trees have to be constructed. The tree server manages the distribution trees and determines how to construct them. The video server is the root of the distribution trees and distributes video streams to the peers. An enterprise has several branches, whose networks we call "branches", and each branch has several divisions, whose networks we call "subnets". For every branch, one peer is selected as the local tree server, and inter-branch trees are constructed among the local tree servers. In the same way, for every subnet, one peer is selected as the subnet tree server, and inter-subnet trees are constructed among the subnet tree servers. The remaining peers are called normal peers, NPs, and inter-NP trees are constructed among them.
  • Our proposal consists of three features: the scheduling algorithm, which reduces the initial waiting time; the tree construction mechanism, which constructs distribution trees quickly; and the fault recovery mechanism, which keeps the video stream from freezing. I explain these three ideas in this order.
  • For scheduling we use the existing pyramid broadcasting technique. Please look at the top figure. The video data is divided into segments, and the segment durations follow a geometric progression so that the video stream does not freeze. Next, please look at the lower figure. Each segment is broadcast repeatedly on a dedicated channel at twice the play rate, and the duration of each segment is called a slot. At the beginning of this scheme, a peer that wants to watch a video stream sends a "schedule request" to the scheduling server. The scheduling server assigns slots to the peer so that it receives the segments one by one from the beginning of the stream. The peer then receives and plays the segments alternately, in the order indicated by the scheduling server. (A small sketch of such a schedule follows these notes.)
  • As the second idea, I explain the tree construction mechanism. Distribution trees are constructed segment by segment and slot by slot. Suppose that the blue peers receive the first segment in this blue slot and the red peers receive it in this red slot. With the scheduling algorithm, both the blue and the red peers receive the second segment in this yellow slot. This means they are all included in the tree for the segment in the yellow slot, and the tree is constructed as shown. Next I explain how these trees are constructed, taking one tree as an example.
  • I explain the three types of trees, one for each level of the hierarchy. First, I show how the inter-branch trees are constructed. After communicating with the scheduling server, each peer sends a "participation request" to the tree server. In each branch, the first peer sending the request is assigned as the local tree server, called the LTS. The tree server informs the LTS of the IP address of the video server, and the LTS establishes a connection to the video server.
  • Suppose that an LTS in branch C also tries to connect to the video server. This would concentrate load on the video server, so the video server has a limit on its number of children, called the "fanout number". If the fanout number is two, the LTS in branch C cannot connect to the video server, because the video server already has two children. In this case, the video server informs the LTS of the IP address of one of its children, and the LTS connects to that address. Peers, however, have the same fanout limitation. If LTS C cannot connect to LTS B because of this limitation, LTS C tries to connect to a child of LTS B. In this way, peers recursively step down the tree, so they always succeed in joining it. The tree server does not manage this process, which helps to reduce the load on the server. Peers record the addresses they were informed of in this process as their "ancestor list". For example, LTS C holds the IP addresses of the video server and LTS B. This information is used in the fault recovery mechanism, which I will explain later. (A sketch of this join procedure follows these notes.) Next, I explain how the inter-subnet trees are constructed.
  • Inter-subnet trees are constructed in almost the same manner as the inter-branch trees. In each subnet, the first peer sending a participation request to the tree server is assigned as the subnet tree server, called the STS. The STS is informed of the IP address not of the video server but of the LTS in its branch, and establishes a connection to that LTS. In the same manner, the inter-subnet tree is constructed while respecting the fanout number. Finally, I explain how the inter-NP trees are constructed.
  • The manner of constructing inter-NP trees differs between the first and later segments. For the first segment, a peer sending a participation request is assigned as a normal peer, NP, is informed of the IP address of the STS of its subnet, and connects to it. For later segments, a peer does not access the tree server but directly contacts the STS of the previous segment and establishes a connection. The inter-NP tree is likewise constructed while respecting the fanout number.
  • As the final point, I discuss the fault recovery mechanism. Faults have many causes and occur on many occasions, for example when a user stops playing the video or when network components such as routers and links fail. This mechanism works in any of these situations. I propose two types of recovery: recovery within a subtree and recovery across different trees. Due to the time limitation, I explain only recovery within a subtree. When a peer leaves the tree, its two children try to recover from the fault. These peers hold information about their ancestors, so one of them can connect to its grandparent and finish recovering. The other peer also tries to connect to the grandparent, but the fanout limitation prevents this. This peer is then informed of the grandparent's children, connects to one of them, and the recovery is completed. (A sketch of this recovery step follows these notes.) This mechanism works for all types of subtrees: inter-branch trees, inter-subnet trees, and inter-NP trees.
  • Our evaluation criteria are the load on servers and peers, the initial waiting time, and the recovery and freeze time. In our scenario, there are five branches in the network and five subnets in each branch. The peer arrival rate is set to 30 arrivals per second, which means that 2,790 peers participate in the video distribution on average. Every peer leaves the service with a probability of 0.004 every second, which means that only 47% of the peers receive the whole video (a quick check of this figure follows these notes). I now introduce some of the results obtained from our simulation experiments.
  • This figure shows the distribution of the initial waiting time that a peer experiences. The initial waiting time runs from when a peer sends a schedule request to the scheduling server until it starts playing the video. We can see that the average waiting time is 2.94 seconds and the maximum waiting time is less than six seconds.
  • This figure depicts the distribution of the freeze time, the total time for which video playout is interrupted. We can see that only 0.5% of the peers experience a freeze, and the maximum freeze time is 0.98 seconds, which is short enough for users to tolerate.
  • Finally, I conclude the presentation. In this talk, we proposed a scheme for video streaming. Simulation experiments verified that our scheme provides users with low-delay and continuous video streaming services. However, we also found that the load on the servers is still rather high, so our approach can be improved by further reducing that load; we therefore need an improved mechanism to distribute the load efficiently. In addition, redundant flows can exist on inter-branch networks, so a new algorithm that builds bandwidth-efficient, low-cost inter-branch distribution trees is a topic for future work.
  • This completes the presentation. Thank you for your kind attention.
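
A minimal Python sketch of the pyramid-broadcasting schedule described in the notes above. The segment lengths (6, 12, 24, 48, 96 seconds for the 186-second video) come from the simulation scenario; the slot-assignment rule in assign_slots is an assumption for illustration, not the authors' exact algorithm.

```python
# Pyramid broadcasting: segment durations follow a geometric progression and
# each segment is broadcast repeatedly on its own channel; a "slot" is one
# segment duration. The slot-assignment rule below is an illustrative assumption.

def segment_lengths(total, n_segments, first=6, ratio=2):
    """Segment durations in seconds, e.g. 6, 12, 24, 48, 96 for a 186 s video."""
    lengths = [first * ratio ** i for i in range(n_segments)]
    assert sum(lengths) == total
    return lengths

def assign_slots(arrival_time, lengths):
    """Assumed rule: for each segment, wait for the next slot boundary on its
    dedicated channel (slot length = segment length) and receive it there."""
    return [((arrival_time // seg_len) + 1) * seg_len for seg_len in lengths]

if __name__ == "__main__":
    lengths = segment_lengths(186, 5)                      # [6, 12, 24, 48, 96]
    print(assign_slots(arrival_time=10, lengths=lengths))  # [12, 12, 24, 48, 96]
```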
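
A minimal sketch of the join procedure from the tree construction notes: a peer walks down the tree from the node it was first told about (the video server for an LTS, the LTS for an STS, the STS for an NP), stepping down to a child whenever the fanout limit is reached, and records the nodes it passes as its ancestor list. The class, the names, and the "first child" policy are illustrative assumptions, not the authors' implementation.

```python
FANOUT = 3  # fanout degree used in the simulation scenario

class Node:
    """A peer or server in a distribution tree (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.children = []
        self.ancestors = []   # path from the entry point down to the parent

    def has_room(self):
        return len(self.children) < FANOUT

def join(newcomer, entry_point):
    """Connect newcomer below entry_point, stepping down to a child when full."""
    current = entry_point
    newcomer.ancestors.append(current)
    while not current.has_room():
        current = current.children[0]      # assumed policy: try the first child
        newcomer.ancestors.append(current)
    current.children.append(newcomer)
    return current                         # the parent the peer finally connects to
```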
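
A minimal sketch of recovery within a subtree, reusing Node and join() from the previous sketch: each orphaned child of a failed peer reconnects below its grandparent, which it finds in its own ancestor list, and join() steps down to a child of the grandparent if the fanout limit would be exceeded. The bookkeeping details are assumptions for illustration, not the authors' protocol.

```python
def recover(orphan, failed_parent):
    """Re-attach one orphaned child after its parent has left the tree.
    Assumes the orphan is at least two levels below the subtree root, so its
    ancestor list contains both the failed parent and the grandparent."""
    grandparent = orphan.ancestors[-2]               # [-1] is the failed parent
    if failed_parent in grandparent.children:
        grandparent.children.remove(failed_parent)   # free the failed peer's slot
    orphan.ancestors = orphan.ancestors[:-2]         # join() re-records the path
    return join(orphan, grandparent)
```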
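
A quick check of the 47% figure in the simulation scenario, under the assumption that a peer leaves independently with probability 0.004 in each second of the 186-second video.

```python
p_leave_per_sec = 0.004
video_duration = 186                               # seconds, from the scenario
p_whole_video = (1 - p_leave_per_sec) ** video_duration
print(f"{p_whole_video:.1%}")                      # about 47%
```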

Transcript

  • 1. A Hybrid Video Streaming Scheme on Hierarchical P2P Networks. Shinsuke Suetsugu*, Naoki Wakamiya, Masayuki Murata (Osaka University, Japan); Koichi Konishi, Kunihiro Taniguchi (NEC Corporation, Japan)
  • 2. Overview
    • Research background
    • Objective
    • Proposed scheme
      • Scheduling algorithm
      • Tree construction mechanism
      • Fault recovery mechanism
    • Simulation experiments
    • Conclusion & future work
  • 3. Research Background (1/2)
    • Increased popularity of video streaming
      • streaming: to decode while downloading
      • distribution of movies or news
    • Development of P2P technology
      • distribute video streaming with P2P multicast
      • reduce load of servers
    (Figure: client-server model vs. P2P multicast, where peers construct a distribution tree to relay the stream)
  • 4. Research Background (2/2)
    • How to construct “good” distribution trees?
      • one way is to measure characteristics of physical networks, such as delay or bandwidth
        • requires long time
        • causes heavy load
      • enterprises or universities: well-organized networks are constructed
        • branch networks or division networks
        • no need to measure characteristics
        • construct distribution trees easily and quickly
  • 5. Objective
    • We propose a hybrid video streaming scheme for…
      • networks hierarchically constructed based on organization structure
      • networks with thousands or tens of thousands of users
    • Objective of our proposal
      • reduce load of servers
      • reduce load of networks
        • avoid connecting between long-distance peers
        • avoid using a link repeatedly for a flow
      • reduce initial waiting time until video starts playing
      • reduce freeze time caused by faults
  • 6. Overview of Proposed Scheme (Figure: the central office hosts the Scheduling Server (SS), Tree Server (GTS), and Video Server (ORG); branches A, B, and C each have a Local Tree Server (LTS); subnets a, b, and c each have a Subnet Tree Server (STS) and Normal Peers (NP); branches correspond to branch offices and subnets to divisions)
  • 7. Outline of Proposed Scheme
    • 1. Scheduling algorithm
      • use “pyramid broadcasting” ->   reduce initial waiting time until video starts playing
    • 2. Tree construction mechanism
      • construct P2P multicast trees ->   reduce load of servers
      • build trees hierarchically, following the physical network structure ->   reduce load of networks
    • 3. Fault recovery mechanism
      • recover from faults rapidly with only local communication ->   reduce freeze time caused by fault
  • 8. 1. Scheduling Algorithm - pyramid broadcasting - (Figure: the video data is divided into segments whose lengths are in the ratio 1 : 2 : 4; the Video Server (ORG) distributes each segment continuously on its own channel at transmission rate 2b, twice the play rate b; a slot is one segment duration, and the Scheduling Server (SS) tells the peer which slots to receive)
  • 9. 2. Tree Construction Mechanism trees are constructed segment-by-segment and slot-by-slot
  • 10. 2. Tree Construction Mechanism - inter-branch tree - (Figure: in each branch, the peer sending the first participation request to the Tree Server (GTS) becomes the LTS; the GTS informs it of the IP address of the Video Server (ORG), to which the LTS then connects)
  • 11. 2. Tree Construction Mechanism - inter-branch tree - (Figure: the Video Server (ORG) limits its number of children, fanout = 2; when it rejects a new LTS it informs the LTS of the IP address of one of its children, and the LTS recursively attempts to connect to children on failure; peers store the informed addresses as a list of ancestors, e.g. video server -> LTS B)
  • 12. 2. Tree Construction Mechanism - inter-subnet tree - (Figure: within a branch, the first peer per subnet to contact the Tree Server (GTS) becomes the STS and is informed of the IP address of the LTS of that branch; the inter-subnet tree respects the fanout limit)
  • 13. 2. Tree Construction Mechanism - inter-NP (Normal Peer) tree -
    • difference between 1st and later segments
    (Figure: for the 1st segment the Tree Server (GTS) informs the NP of the IP address of the STS in its subnet; for later segments the NP accesses the STS of the previous segment directly, without the Tree Server; the fanout limit is respected)
  • 14. 3. Fault Recovery Mechanism - recovery within subtree -
    • Fault:
      • peer leaves network
        • variety of reasons
        • different occasions
    • Recovery within subtree
      • each peer has information about ancestors
      • mechanism can be applied in the same manner to:
        • inter-branch trees
        • inter-subnet trees
        • inter-NP trees
    (Figure: each orphaned peer sends a connection request to its grandparent; when the fanout limit is reached, the grandparent informs the peer of a child's address and the peer connects there, completing recovery)
  • 15. Simulation Experiments
    • Evaluation criteria
      • load of servers and peers
      • initial waiting time
      • recovery and freeze time
    • Scenario
      • 5 branches and 5 subnets in a branch
      • peer arrival process: uniform, average: 30 [arrivals/sec]
        • ->   average total number of peers: 2,790
      • transmission delay between server and peer: 200 [ms]
      • transmission delay between peers: 20 [ms]
      • fanout degree: 3
      • fault probability: 0.004 [per sec]
        • (-> 47% of peers receive the whole video)
      • video duration: 186 [sec]
      • number (lengths) of segments: 5 ( 6, 12, 24, 48, 96 [sec] )
  • 16. Initial Waiting Time (Figure: cumulative probability vs. initial waiting time [sec]; average waiting time: 2.94 sec, maximum waiting time: < 6 sec)
  • 17. Freeze Time (Figure: cumulative probability vs. freeze time [sec]; the video freezes for only 0.5% of the thousands of peers; maximum freeze time: < 1 sec)
  • 18. Conclusion & Future Work
    • Proposal of video streaming distribution scheme
        • Scheduling algorithm
        • Tree construction mechanism
        • Fault recovery mechanism
      • Reduction of initial waiting time and freeze time
    • Extensions
      • Further reduction of load on the tree server
      • Optimization of tree structures to eliminate long-distance redundant links
  • 19.
    • Thank you.