CTS05 Uyar Broker Network Presentation 2

    1. Investigating the Performance of Audio/Video Service Architecture II: Broker Network
       Ahmet Uyar & Geoffrey Fox
       Tuesday, May 17th, 2005
       The 2005 International Symposium on Collaborative Technologies and Systems (CTS 2005)
       Saint Louis, Missouri, USA
    2. Outline
       • Introduction
       • NaradaBrokering overview
       • Delivery priority for inter-broker traffic
       • Single meeting tests
       • Multiple meeting tests
       • Wide area tests
       • Conclusion
    3. Introduction
       • We investigate the performance and capacity of the broker network in multiple-broker settings.
       • We test the scalability of the broker network for both single large meetings and multiple smaller meetings.
       • We perform wide area tests to investigate the issues that arise in real-life videoconferencing over the Internet.
       • The test results provide guidelines for the deployment and feasibility of the GlobalMMCS videoconferencing system in particular and software-based systems in general.
    4. NaradaBrokering broker organization
    5. Performance Tests
       • We used an H.263 video stream with 280 kbps bandwidth.
       • We used two 8-node Linux clusters to perform the tests.
       • In cluster 1, each node had dual 2.4 GHz Intel Xeon CPUs, 2 GB of memory, and the Linux 2.4.22 kernel.
       • In cluster 2, each node had dual 2.8 GHz Intel Xeon CPUs, 2 GB of memory, and the Linux 2.4.26 kernel.
    6. Delivery Priority for Inter-broker Traffic
       • We give priority to inter-broker packet delivery over local client deliveries (sketched below).
       • This lets packets traverse many brokers with very little overhead, so the broker network can grow in size.
       • It eliminates cases where one overloaded broker severely affects the performance of other brokers.
       • It lets the load be distributed among brokers in large meetings.
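       A minimal sketch of this double-queue dispatch, assuming one dispatcher thread per broker; the class and method names are illustrative and are not NaradaBrokering's actual API:

           import java.util.concurrent.LinkedBlockingQueue;

           // Sketch of the "double queue" idea on this slide: inter-broker
           // packets are dispatched from their own queue ahead of local client
           // deliveries, so a packet crossing several brokers never waits
           // behind a large local fan-out. Illustrative names only.
           public class DoubleQueueDispatcher implements Runnable {

               private final LinkedBlockingQueue<Runnable> interBrokerQueue = new LinkedBlockingQueue<>();
               private final LinkedBlockingQueue<Runnable> localClientQueue = new LinkedBlockingQueue<>();

               public void enqueueInterBroker(Runnable delivery) { interBrokerQueue.add(delivery); }
               public void enqueueLocalClient(Runnable delivery) { localClientQueue.add(delivery); }

               @Override
               public void run() {
                   while (!Thread.currentThread().isInterrupted()) {
                       // Inter-broker deliveries always go first.
                       Runnable next = interBrokerQueue.poll();
                       if (next == null) {
                           // Serve local clients only when no broker-bound work is pending.
                           next = localClientQueue.poll();
                       }
                       if (next != null) {
                           next.run();
                       } else {
                           // Idle: back off briefly instead of busy-spinning.
                           try { Thread.sleep(1); } catch (InterruptedException e) { return; }
                       }
                   }
               }
           }

       The next slide's measurements reflect this design choice: with one shared queue, Broker 2's six users wait behind Broker 1's 400 local deliveries (about 16 ms); with two queues their latency drops to about 1.5 ms.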
    7. Latencies with single and double queuing

       Single Queue           First User (ms)   Mid User (ms)   Last User (ms)   Avr. (ms)
       Broker 1 (400 users)   15.8              20.2            24.5             20.2
       Broker 2 (6 users)     16.1              16.1            16.2             16.1

       Double Queue           First User (ms)   Mid User (ms)   Last User (ms)   Avr. (ms)
       Broker 1 (400 users)   16.1              20.5            24.9             20.5
       Broker 2 (6 users)     1.4               1.5             1.6              1.5
    8. Single Video Meeting Tests for Distributed Brokers
       • There is an equal number of participants on each broker.
       • We gather results from the last user on each broker.
    9. Latencies from 4 brokers
       • Broker 1 and Broker 2 have very similar latency values.
       • Broker 3 and Broker 4 have similar and slightly better latency values.
       • Going through multiple brokers does not introduce considerable overhead.
       • The scalability of the system can be increased almost linearly by adding new brokers, as the arithmetic below suggests.
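       Putting the deck's own numbers together (a rough extrapolation, not a slide claim): slide 7 shows a single broker serving about 400 users, and slide 11 reports four brokers supporting up to 1600 users in a single meeting, consistent with near-linear scaling:

           \[
             \text{capacity}(n) \;\approx\; n \times 400 \text{ users},
             \qquad \text{e.g. } \text{capacity}(4) \approx 1600 .
           \]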
    10. Multiple Meeting Tests for Distributed Brokers
       • The same setting as the single video meeting tests, except that all brokers ran on cluster 2.
       • The behavior of the broker network is more complex, since there are many stream deliveries among the brokers.
       • Having multiple meetings provides both opportunities and challenges. Conducting multiple concurrent meetings on the broker network can increase both the quality of the service provided and the number of supported users, as long as the size of the meetings and the distribution of clients among brokers are managed appropriately.
       • The best broker utilization is achieved when multiple streams come into a broker and each incoming stream is delivered to many receivers. If all brokers are fully utilized in this fashion, a multi-broker network provides better service to a higher number of participants; see the cost sketch below.
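       A rough cost model (our illustration; the slides do not state it) makes the trade-off concrete. For one published stream in a meeting of size m whose receivers are spread over b brokers:

           \[
             \text{deliveries per stream} \;=\;
             \underbrace{(b-1)}_{\text{inter-broker copies}} \;+\;
             \underbrace{m}_{\text{local client deliveries}}
           \]

       When m is large, the inter-broker term is negligible and spreading the meeting shares the m local deliveries across brokers; when m is small, the (b-1) copies dominate, which is why small meetings are best confined to one broker.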
    11. Multiple Video Meeting Tests
       • 4 brokers can support 48 meetings, 1920 users in total, with excellent quality.
       • This number is higher than in the single video meeting tests, in which four brokers supported up to 1600 users.
       • When we repeated the same test with meeting size 20, 1400 participants could be supported in 70 meetings.

       Latency values for meeting size 40:

       Number of Meetings   Total users   Broker1 (ms)   Broker2 (ms)   Broker3 (ms)   Broker4 (ms)
       40                   1600          3.34           6.93           8.43           8.37
       48                   1920          3.93           8.46           14.62          10.59
       60                   2400          9.04           170.04         89.97          25.83

       Loss rates for meeting size 40:

       Number of Meetings   Total users   Broker1 (%)   Broker2 (%)   Broker3 (%)   Broker4 (%)
       40                   1600          0.00          0.00          0.00          0.00
       48                   1920          0.12          0.29          0.50          0.50
       60                   2400          0.16          1.30          2.51          2.82
    12. Wide-Area Media Delivery Tests
       • We tested with five distant sites:
         • Syracuse, NY
         • Tallahassee, Florida
         • Cardiff, UK
         • Two sites at Bloomington, IN
       • We tested two cases:
         • a single broker at Indiana
         • one broker at each site
    13. Single Video Meeting Tests
       • There is one broker at Indiana.
       • There is one video meeting; one user at IN publishes the video stream.
       • There is an equal number of participants at every site.
       • Latency values are the combination of transmission latency and routing overhead.
       • The first row mainly shows the transmission latency, since the routing overhead is very small.
       • Transmission latency is very small for all sites and does not increase significantly when more streams are transmitted (a quick check of the per-site bandwidth follows the table).

       Latencies of last participants (ms):

       Users per site   Total users   IN      NY      FL      UK      BW per site
       4                16            1.8     13.28   13.63   56.36   1.2 Mbps
       50               200           10.84   23.94   24.42   65.55   15 Mbps
       100              400           21.36   36.25   36.78   76.15   30 Mbps
       150              600           33.04   49.8    47.6    86.98   45 Mbps
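       The "BW per site" column is essentially the fan-out of the 280 kbps stream to each site's receivers, since the single broker at IN sends a separate copy to every remote user. A quick check (our arithmetic; the listed figures appear to be rounded up, perhaps to cover packet header overhead):

           \[
             \text{BW per site} \;\approx\; \text{users per site} \times 280 \text{ kbps},
             \qquad \text{e.g. } 150 \times 280 \text{ kbps} = 42 \text{ Mbps}
             \;(\text{listed as } 45 \text{ Mbps}).
           \]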
    14. Summary of Wide-Area Tests
       • Running brokers at distributed locations has many benefits:
         • It saves bandwidth and eliminates bandwidth limitations.
         • Transferring a smaller number of streams yields better transmission service, with lower latency, jitter, and loss rates.
         • The load is distributed over many brokers, so more users can be served with better quality of service.
         • End-to-end packet latency can be reduced considerably by running brokers at geographically distant locations.
       • The networks that we used provided excellent service, with very small loss rates, latencies, and jitter values.
       • Network connections need to be checked for high quality; some sites cannot use their full capacity.
    15. Conclusion
       • Test results showed that the broker network scales well for both single large meetings and multiple smaller meetings.
       • In large meetings, the capacity of the broker network increases with the capacity of the added brokers.
       • In multiple smaller meetings, the distribution of users among brokers is important. Inter-broker stream exchange can reduce scalability, so a few users should not be scattered across the broker network.
       • In wide area networks, this videoconferencing system provides many benefits with a distributed broker architecture: bandwidth savings, latency savings, and better quality of service.
       • In summary, thousands of concurrent users can easily be supported in distributed broker settings.
    16. Questions…
