Scalable Service Oriented Architecture for Audio/Video Conferencing
  • Each audio packet is independent of the others, so every packet in the audio stream takes almost the same amount of time to route. This results in a very small amount of jitter. In addition, the latency for the first participant is almost always the same, independent of the number of participants in the meeting.
  • Since there are multiple video packets in a frame, later packets wait for the earlier ones in the frame. Therefore, even the latency for the first participant increases as the number of participants in the meeting grows, and the jitter increases likewise. One broker can support at most 400 participants, although it does not saturate until there are 1000 participants.
  • Each meeting has 20 participants and one transmitter. When there are more than 10 meetings, results are gathered from 10 of them. Average latency and jitter are much smaller than in the single video meeting tests. There are no late-arriving packets until 700 participants, so 35 meetings with 700 participants can be supported by one broker.
  • There is one video meeting, with an equal number of participants on each broker. We gather results from the first and last user on each broker.

Presentation Transcript

  • Scalable Service Oriented Architecture for Audio/Video Conferencing By Ahmet Uyar Wednesday, March 23, 2005
  • Outline
    • Research Issues
    • Criteria for videoconferencing systems
    • Overview of current videoconferencing systems
    • Overview of GlobalMMCS architecture
    • NaradaBrokering overview and additions
    • Performance tests for audio/video delivery
    • Service oriented architecture for videoconferencing
    • Conclusion
  • Research Issues
    • We investigate the question of how to develop scalable and universally accessible videoconferencing systems over the Internet.
    • We propose using publish/subscribe event broker systems for the distribution of real-time audio and video streams in videoconferencing sessions and investigate the issues pertaining to scalability, performance, data representation, meeting management and media processing services.
    • Since real-time audio/video delivery requires low latency and high bandwidth, we investigate the performance and the scalability of this software based messaging middleware extensively.
  • We propose a service-oriented architecture for videoconferencing: we identify the tasks performed in videoconferencing sessions and provide an independently scalable component for each. We identified three main tasks:
      • audio/video distribution
      • media processing
      • meeting management.
  • Criteria for Videoconferencing Systems
    • We identified the following criteria for videoconferencing systems:
      • Scalability
      • Security
      • Traversing through firewalls, proxies and NAT
      • Supporting heterogeneous clients
      • Easy to develop, maintain and use
      • Support for data conferencing
  • Videoconferencing Standards and Systems
    • Multicast based systems
      • AccessGrid
    • H.323 based systems
      • Polycom
      • CUseeMe
    • VRVS
  • Multicast Based Systems
    • AccessGrid is the most commonly used room based videoconferencing system for group communications.
    • Scales well.
    • Difficult to provide security services: there is no authority to manage multicast IP addresses, and it is vulnerable to denial-of-service attacks.
    • No support for traversing firewalls and proxies.
    • Low-end users cannot join meetings, since no media processing is provided.
    • Easy to use and understand.
    • Third party data conferencing applications can be used.
  • H.323 Based Systems
    • There are many companies that provide H.323 based videoconferencing systems. Polycom, FVC, etc.
    • Does not scale well.
    • H.235 defines the security mechanisms but most H.323 based systems do not implement it yet.
    • H.323 based systems are not firewall friendly; they require almost all ports to be open.
    • Limited number of heterogeneous clients can be supported.
    • Not very easy to understand and develop services.
    • T.120 defines data conferencing: whiteboard sharing, file transfer and application sharing.
  • H.323 Centralized Multipoint Conferencing (figure)
  • H.323 Decentralized Multipoint Conferencing (figure)
  • H.323 MCU cascading architecture (figure)
  • VRVS (Virtual Rooms Videoconferencing System)
    • Uses software reflectors to distribute audio/video streams.
    • Not open source. No details available.
    • Can go through firewalls, NATs and proxies.
  • GlobalMMCS Overview
    • Videoconferencing Tasks:
      • Audio/Video Distribution
      • Media processing
      • Meeting management
  • Evaluation of GlobalMMCS
    • Scalability: Provides scalability by separating media processing from media delivery.
    • Security: NB provides all security services. It also takes precautions against denial of service and replay attacks.
    • Traversing through firewalls, proxies and NAT
    • Supporting heterogeneous clients: Since we provide a scalable media processing framework and many transport protocols, we can support a diverse set of end points.
    • Easy to develop, maintain and use
    • Support for data conferencing
  • Media Distribution Middleware (NaradaBrokering)
    • Requirements for Media Delivery
      • High Bandwidth
      • Low latency
      • Tolerance of packet loss
    • NaradaBrokering
      • NB organizes brokers in a hierarchical cluster-based architecture.
      • NB supports dynamic broker and link additions/removals.
      • Messages are routed only to those brokers that have at least one subscription
      • NB has a flexible transport mechanism
      • NB is JMS compliant and supports reliable message delivery.
      • NB provides performance monitoring service
  • NaradaBrokering broker organization
  • Incorporating Support for Audio/Video Delivery in NaradaBrokering
    • Adding support for an unreliable transport protocol, UDP
    • Implementing a distributed topic number generation mechanism
    • Designing a Unique ID Generation Mechanism
    • Designing a new compact event
    • Adding support for legacy RTP clients
    • Some improvements in the routing algorithm
  • Implementing Distributed Topic Number Generation Mechanism
    • The requirements for topic number generation:
      • spatial independence of a topic generator
      • temporal independence of a topic generator
      • Acceptable size
    • One topic number generator runs in every broker
    • A 20-bit topic generator id guarantees spatial independence
      • 2^20 = 1,048,576 topic number generators
    • A 44-bit timestamp provides temporal independence
      • 2^44 = 17,592,186,044,416 distinct timestamp values
      • 557 years with one millisecond resolution.
    • UUID solves a similar problem with 16 bytes
    • This mechanism can also be used to generate unique ids.
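The scheme above can be sketched in Java (the language the system is written in). The class and method names here are illustrative, not NaradaBrokering's actual API; only the 20-bit/44-bit split comes from the slides. Note that 20 + 44 bits pack exactly into a 64-bit value, matching the 8-byte topic name used by RTPEvent later in the talk.

```java
// Sketch of a distributed topic-number generator: a 20-bit per-broker
// generator id (spatial independence) combined with a 44-bit millisecond
// timestamp (temporal independence), packed into one 64-bit id.
public class TopicIdGenerator {
    private final long generatorId;   // unique per broker, must fit in 20 bits
    private long lastTimestamp = -1;

    public TopicIdGenerator(long generatorId) {
        if (generatorId < 0 || generatorId >= (1L << 20))
            throw new IllegalArgumentException("generator id must fit in 20 bits");
        this.generatorId = generatorId;
    }

    /** Returns a globally unique 64-bit topic id. */
    public synchronized long nextId() {
        long now = System.currentTimeMillis() & ((1L << 44) - 1);
        if (now <= lastTimestamp) now = lastTimestamp + 1; // stay distinct within 1 ms
        lastTimestamp = now;
        return (generatorId << 44) | now;
    }
}
```

Because each broker owns a distinct generator id, no coordination between brokers is needed to keep ids unique.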
  • Designing a New Event
    • In publish/subscribe messaging systems, messages tend to have many headers.
    • A message in JMS API has at least 10 headers. These headers take around 200 bytes when they are serialized to transfer over the network.
    • A ULAW audio packet carrying 20 ms of audio is 172 bytes, and the stream requires about 64 kbps of network bandwidth. Padding an extra 200 bytes of headers onto each audio packet raises the bandwidth requirement to about 148 kbps.
    • It is also more costly to serialize/de-serialize more headers.
    • RTPEvent has four headers and is 14 bytes long:
      • Event and Media headers are 1 byte each
      • Topic Name is 8 bytes
      • Source info is 4 bytes.
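As a rough illustration of the compact layout, the 14-byte header described above could be serialized with java.nio.ByteBuffer as follows. The class name and field order are assumptions; only the field sizes (1 + 1 + 8 + 4 bytes) come from the slide.

```java
import java.nio.ByteBuffer;

// Illustrative codec for the 14-byte RTPEvent header: 1-byte event
// header, 1-byte media header, 8-byte topic name, 4-byte source info,
// followed by the raw RTP payload.
public class RtpEventCodec {
    public static final int HEADER_SIZE = 1 + 1 + 8 + 4; // 14 bytes

    public static byte[] encode(byte eventType, byte mediaType,
                                long topicId, int sourceId, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(HEADER_SIZE + payload.length);
        buf.put(eventType).put(mediaType).putLong(topicId).putInt(sourceId);
        buf.put(payload);
        return buf.array();
    }

    /** Reads the 8-byte topic name, skipping the two 1-byte headers. */
    public static long topicIdOf(byte[] event) {
        return ByteBuffer.wrap(event).getLong(2);
    }
}
```

Compared with a 200-byte serialized JMS header set, this keeps the per-packet overhead to 14 bytes and makes serialization a handful of fixed-width writes.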
  • Supporting Legacy RTP Clients
    • RTPLinks receive raw RTP packets over UDP or multicast from legacy systems, wrap them in RTPEvents and propagate them through the broker node. They also receive RTPEvents from the broker node and send them to clients as raw RTP packets.
    • Each RTPLink opens two sockets, one for RTP and the other for RTCP. Similarly, it subscribes to two topics, one for RTP and the other for RTCP.
    • Some RTP sessions may carry more than one media stream; in that case, each stream can be published to a different topic.
    • RTPLinks can be managed either statically or dynamically.
  • Performance Tests of NaradaBrokering
    • The Characteristics of Audio and Video Streams
    • Quality Assessment of Media Delivery
    • Performance Tests for One Broker
    • Performance Tests for Distributed Brokers
    • Wide-Area Media Delivery Tests
  • Characteristics of Audio and Video Streams
    • Audio streams are composed of fixed-size packets sent at regular intervals.
    • We chose a 64 kbps ULAW audio stream for the tests:
      • One audio packet is sent every 30 ms. Each audio packet is 252 bytes.
      • There are 4100 packets in total during the 2 minutes.
    • Video codecs also encode frames periodically. However, each frame may consist of multiple video packets, and full picture update frames contain many more packets.
    • We chose an H.263 video stream with an average bandwidth of 280 kbps, for 2 minutes:
      • 15 frames are encoded every second, one frame every 66 ms.
      • 1800 frames and 5610 packets in total, on average 3.1 packets per frame.
      • One full picture update every 60 frames or 4 seconds.
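The audio figures above follow from simple arithmetic: 64 kbps ULAW produces 8000 one-byte samples per second, so a 30 ms packet carries 240 payload bytes; adding the fixed 12-byte RTP header gives the 252-byte packets used in the tests. A minimal sketch (hypothetical helper class):

```java
// Back-of-the-envelope math behind the audio stream characteristics.
public class StreamMath {
    /** Size of one ULAW packet covering packetMs of audio, incl. RTP header. */
    public static int ulawPacketBytes(int packetMs) {
        int payload = 8000 * packetMs / 1000; // 8000 one-byte samples per second
        return payload + 12;                  // fixed 12-byte RTP header
    }

    /** On-the-wire bandwidth in kbps for one packet of this size per interval. */
    public static double wireKbps(int packetBytes, int packetMs) {
        return packetBytes * 8.0 * (1000.0 / packetMs) / 1000.0;
    }
}
```

The same arithmetic shows the wire rate is slightly above the nominal 64 kbps (about 67 kbps) once the RTP header is counted.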
  • Quality Assessment of Media Delivery
    • There are three important factors: latency, jitter and packet loss
    • ITU recommends that the mouth-to-ear latency of audio should be
      • Less than 400ms for acceptable quality
      • Less than 300ms for good quality
      • Less than 150ms for excellent quality.
    • The total latency is the combination of:
      • Processing at sender and receiver
      • Transmission latency
      • Routing latency by the broker network
    • We limit the routing latency to at most 100 ms.
    • Packets that take more than 100 ms are labeled as late arrivals.
    • We limit the jitter caused by routing to 10 ms.
    • We limit the loss rate to 1.0%.
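A measuring receiver could apply these thresholds roughly as follows. This is an illustrative sketch, not the actual test client: the jitter smoothing used here is the RFC 3550 interarrival-jitter estimator, and it assumes sender and receiver clocks are comparable.

```java
// Sketch of a receiver-side quality monitor: packets whose routing
// delay exceeds 100 ms count as late arrivals; jitter is tracked with
// the RFC 3550 smoothed estimator J += (|D| - J) / 16.
public class QualityMonitor {
    public static final double LATE_MS = 100.0;
    private double jitter = 0.0;
    private double lastTransit = Double.NaN;
    private int packets = 0, lateArrivals = 0;

    /** Record one packet given its send and receive timestamps (ms). */
    public void onPacket(double sentMs, double receivedMs) {
        packets++;
        double transit = receivedMs - sentMs;
        if (transit > LATE_MS) lateArrivals++;
        if (!Double.isNaN(lastTransit)) {
            double d = Math.abs(transit - lastTransit);
            jitter += (d - jitter) / 16.0; // RFC 3550 smoothing factor
        }
        lastTransit = transit;
    }

    public double lateRate() { return packets == 0 ? 0 : 100.0 * lateArrivals / packets; }
    public double jitterMs() { return jitter; }
}
```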
  • Performance Tests for One Broker
    • Single Meeting Tests
      • Single audio meeting tests
      • Single video meeting tests
      • Audio + Video meeting tests
    • Multiple Meeting Tests
      • Multiple audio meeting tests
      • Multiple video meeting tests
      • Multiple Audio + Video meeting tests
  • Single Meeting Tests
    • One transmitter and 12 measuring receivers. Other receivers are passive.
    • Tests were conducted in a Linux cluster of 8 identical machines, each with dual Intel Xeon 2.4 GHz CPUs and 2 GB of memory, running the Linux 2.4.22 kernel. All programs are written in Java. There is a gigabit connection among the cluster nodes.
  • Single Audio Meeting Tests I
    (L(1), L(N), L(av): latency for the first, last and average user; J(av): average jitter; LA(av): late arrivals)

    Number of Clients   L(1) (ms)   L(N) (ms)   L(av) (ms)   J(av) (ms)   LA(av) (%)
    12                  0.5         0.7         0.6          0.18         0
    100                 0.5         2.3         1.4          0.15         0
    200                 0.5         4.1         2.3          0.18         0
    400                 0.5         7.9         4.2          0.21         0
    800                 0.5         15.5        8            0.18         0
    1200                0.5         22.6        11.6         0.22         0
    1400                0.5         26.5        13.5         0.26         0
    1500                3.3         32.3        17.8         0.44         0.25
    1600                2260        2290        2275         1.2          100
  • Single Audio Meeting Tests II
    • The latency of the first user is constant and does not depend on the number of users in a meeting.
    • Each audio packet is independent of the others: the routing of each packet is completed before the next one arrives, so all packets in the stream take almost the same amount of time to reach a client.
    • The broker saturates when the latency of the last user exceeds 30 ms.
    • 1500 users can be supported in an audio meeting.
  • Broker saturation in single audio meeting
    • Latency values for the middle user in single audio meeting with 1600 participants.
  • Single Video Meeting Tests I

    Number of Clients   L(1) (ms)   L(N) (ms)   L(av) (ms)   J(av) (ms)   LA(N) (%)
    12                  1           1.3         1.2          0.44         0
    100                 3.1         5           4            2            0
    200                 6.3         10.2        8.3          4.7          0
    300                 10.2        16.2        13.2         7.8          0
    400                 13.4        21.2        17.3         10.1         0.75
    500                 18.2        28.5        23.4         13.2         3.0
    600                 22.6        34.5        28.6         15.5         5.1
    700                 29.8        43.7        36.8         18.1         8.4
    800                 45.6        61.6        53.6         21.3         17.6
    900                 93.7        111.7       102.7        23.8         40.8
    1000                1599        1619        1609         27.8         99
  • Single Video Meeting Tests II
    • Latency values for the last receiver in single video meeting with 400 participants.
    • Peaks correspond to full picture update frames.
    • Because of late-arriving packets, one broker can support at most 400 participants, although it does not saturate until there are 1000 participants.
    • The main reason for the late-arriving packets is the full picture updates.
  • Audio and Video Combined Meeting Tests
    • Audio and video meetings affect each other's performance.
    • Our initial tests showed that the impact of video meeting on the performance of an audio meeting is significant. Therefore, we gave priority to audio routing at the broker.
    • There are two queues at the broker: audio and non-audio. When an audio packet arrives, it is routed next, as soon as the routing of the current packet is completed.
    • With 600 participants there is only a 5 ms difference, so the impact of the video meeting on the performance of the audio meeting is not significant.
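The two-queue scheme can be sketched as follows. This is an illustration of the policy only, not NaradaBrokering's actual routing code: the dispatcher keeps separate audio and non-audio queues and always drains audio first, so a long run of video packets cannot delay audio.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the broker's two-queue priority routing: audio packets
// are always routed before any queued non-audio packets.
public class PriorityDispatcher<T> {
    private final Queue<T> audio = new ArrayDeque<>();
    private final Queue<T> other = new ArrayDeque<>();

    public synchronized void enqueue(T pkt, boolean isAudio) {
        (isAudio ? audio : other).add(pkt);
    }

    /** Next packet to route: audio wins whenever any is waiting. */
    public synchronized T next() {
        return audio.isEmpty() ? other.poll() : audio.poll();
    }
}
```

Note that the packet currently being routed is never preempted; the priority applies only to the choice of the next packet, matching the description above.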
  • Comparison of single video meetings and audio + video meetings
    • This test shows that the impact of an audio meeting on the performance of a video meeting is not significant.
    • In audio and video combined meetings, the broker supports almost the same number of participants as in the case of single video meetings. The main reason for this is the better utilization of broker resources when there are two concurrent meetings.
  • Multiple Video Meeting Tests

    Number of Meetings   Total users   L(av) (ms)   J(av) (ms)   LA(av) (%)
    5                    100           2.25         0.68         0
    10                   200           2.74         0.85         0
    15                   300           3.17         0.86         0
    20                   400           4.54         1.1          0
    25                   500           5.94         1.3          0
    30                   600           6.8          1.37         0
    35                   700           10.6         1.52         0.7
    40                   800           81.1         1.8          19
    45                   900           2787         3.3          98
  • Latency values for each video packet when there are 30 meetings with 600 participants.
    • This graph shows no peaks in latency for full picture update frames, as there were in the single video meeting case.
  • Summary of Single Broker Tests
    • 1500 participants are supported in one audio meeting
    • 400 participants are supported in one video meeting
    • Up to 400 audio participants and 400 video participants are supported in audio + video meetings.
    • 700 participants can be supported in 35 video meetings each having 20 participants
    • 1300 participants can be supported in 65 audio meetings each having 20 participants
    • 20 audio and 20 video meetings can be supported each having 20 participants.
  • Performance Tests for Distributed Brokers
    • We have given priority to inter-broker package delivery over local client deliveries.
    • This lets packets traverse many brokers with very little overhead and allows the broker network to scale.
    • It also eliminates cases where one overloaded broker severely affects the performance of other brokers.
  • Test results with single and double queuing

    Single queue:
    Broker                 First User (ms)   Mid User (ms)   Last User (ms)   Avr. (ms)
    Broker 1 (400 users)   15.8              20.2            24.5             20.2
    Broker 2 (6 users)     16.1              16.1            16.2             16.1

    Double queue:
    Broker                 First User (ms)   Mid User (ms)   Last User (ms)   Avr. (ms)
    Broker 1 (400 users)   16.1              20.5            24.9             20.5
    Broker 2 (6 users)     1.4               1.5             1.6              1.5
  • Single Video Meeting Tests for Distributed Brokers
    • There are equal number of participants in each broker.
    • We gather results from first and last user from each broker.
  • Latencies from 4 brokers
    • Broker1 and Broker2 have very similar latency values.
    • Broker3 and Broker4 have similar and slightly better latency values.
    • Going through multiple brokers does not introduce considerable overhead.
    • Scalability of the system can be increased almost linearly by adding new brokers.
  • Multiple Meeting Tests for Distributed Brokers
    • The same setting as the single video meeting tests. However, all brokers were running at cluster 2.
    • The behavior of the broker network is more complex when there are multiple concurrent meetings compared to having a single meeting.
    • Having multiple meetings provides both opportunities and challenges. Conducting multiple concurrent meetings on the broker network can increase both the quality of service and the number of supported users, as long as the size of these meetings and the distribution of clients among brokers are managed appropriately.
    • The best broker utilization is achieved when there are multiple streams coming to a broker and each incoming stream is delivered to many receivers. If all brokers are utilized fully in this fashion, multi broker network provides better services to higher number of participants.
  • Multiple Video Meeting Tests
    • 4 brokers can support 48 meetings with 1920 users in total with excellent quality.
    • This number is higher than the single video meeting tests in which four brokers supported up to 1600 users.
    • When we repeated the same test with meeting size 20, 1400 participants could be supported.
    Latency values and loss rates for meeting size 40

    Average latency:
    Number of Meetings   Total users   Broker1 (ms)   Broker2 (ms)   Broker3 (ms)   Broker4 (ms)
    40                   1600          3.34           6.93           8.43           8.37
    48                   1920          3.93           8.46           14.62          10.59
    60                   2400          9.04           170.04         89.97          25.83

    Loss rates:
    Number of Meetings   Total users   Broker1 (%)    Broker2 (%)    Broker3 (%)    Broker4 (%)
    40                   1600          0.00           0.00           0.00           0.00
    48                   1920          0.12           0.29           0.50           0.50
    60                   2400          0.16           1.30           2.51           2.82
  • Wide-Area Media Delivery Tests
    • We tested with five distant sites:
      • Syracuse, NY,
      • Tallahassee, Florida,
      • Cardiff, UK
      • Two sites at Bloomington, IN
    • We tested two cases:
      • single broker at Indiana
      • one broker at each site
  • Summary of Wide-Area Tests
    • Running brokers at distributed locations has many benefits:
      • It saves bandwidth and avoids bandwidth bottlenecks.
      • Transferring a smaller number of streams yields better transmission service, with lower latency, jitter and loss rates.
      • Load is distributed across many brokers, so more users can be served with better quality of service.
      • Sender-to-receiver transmission latency can be reduced considerably by running brokers at geographically distant locations.
    • The networks that we used provided excellent services with very small loss rates, latency and jitter values.
    • The network connections need to be checked for quality: the Cardiff site could not even support 10 video streams (3 Mbps), well below its nominal capacity (10 Mbps).
  • Meeting Management Architecture and Services
    • There are three main components.
    • Meeting Management Unit starts/ends meetings, handles user joins and leaves.
    • Media Processing Units provide audio mixing, video mixing and image grabbing.
    • A unified framework is provided to distribute service providers and to manage the interactions among system components.
  • Messaging Among System Components
    • Although some messages are sent to a group of destinations, many are destined for a single target; therefore, an efficient message exchange mechanism is needed.
    • We use reliable JMS messages to provide communications among various components in the system.
    • This simplifies building a scalable solution, since messages can be delivered to multiple destinations without explicit knowledge of the publisher.
    • Messaging Semantics
      • Request/Response messaging
      • Group messaging
      • Event based messaging
  • Topic Naming Conventions
    • Two types of topics are needed: group topics and unique component topics.
    • All topic names start with a common root, GlobalMMCS.
    • Group topic names are constructed by adding the component name to the root
      • GlobalMMCS/AudioSession
      • GlobalMMCS/AudioMixerServer
    • Unique component topic names are constructed by adding the unique ids:
      • GlobalMMCS/AudioSession/<sessionID>
      • GlobalMMCS/AudioMixerServer/<serverID>
    • Sometimes a component communicates with many different components; in that case, there is one more layer to distinguish these communication channels
      • GlobalMMCS/AudioSession/<sessionID>/AudioMixerServer
      • GlobalMMCS/AudioSession/<sessionID>/RtpLinkManager
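Since the convention above is plain string concatenation from a fixed root, a small helper captures all three topic forms. The class and method names are hypothetical; the topic strings themselves come from the slides.

```java
// Illustrative helper for the GlobalMMCS topic naming convention:
// group topics, unique component topics, and per-channel topics.
public class Topics {
    public static final String ROOT = "GlobalMMCS";

    /** Group topic, e.g. GlobalMMCS/AudioSession */
    public static String group(String component) {
        return ROOT + "/" + component;
    }

    /** Unique component topic, e.g. GlobalMMCS/AudioSession/<sessionID> */
    public static String unique(String component, String id) {
        return group(component) + "/" + id;
    }

    /** Channel topic, e.g. GlobalMMCS/AudioSession/<sessionID>/RtpLinkManager */
    public static String channel(String component, String id, String peer) {
        return unique(component, id) + "/" + peer;
    }
}
```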
  • Service Distribution Framework
    • A unified framework to distribute many types of service providers
    • Addressing : Each service provider and consumer is identified by a unique topic name.
    • Service Discovery : Dynamic discovery mechanism. Inquiry & ServiceDescription messages.
    • Service Selection: the consumer selects the best service provider.
    • Service Execution: the consumer executes the service by sending a Request message.
    • Advantages:
      • Fault tolerant
      • Scalable
      • Location independent
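The selection step could look like the following sketch, where ServiceDescription is a hypothetical stand-in for the discovery reply and "best" simply means least loaded; the actual message contents and selection policy are not specified on the slide.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of service selection: after broadcasting an Inquiry, the
// consumer collects ServiceDescription replies and picks the topic
// of the least loaded provider to send its Request message to.
public class ServiceSelector {
    /** Hypothetical discovery reply: provider topic plus a load figure. */
    public static class ServiceDescription {
        final String topic;
        final double load;
        public ServiceDescription(String topic, double load) {
            this.topic = topic;
            this.load = load;
        }
    }

    public static String selectLeastLoaded(List<ServiceDescription> replies) {
        return replies.stream()
                .min(Comparator.comparingDouble((ServiceDescription sd) -> sd.load))
                .map(sd -> sd.topic)
                .orElse(null);
    }
}
```

Because providers are addressed only by topic name, the consumer never needs to know where a provider actually runs, which is what gives the framework its location independence.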
  • Session Management
    • Audio and video sessions are managed separately.
    • AudioSession objects manage audio sessions and VideoSession objects manage video sessions
    • MeetingManager objects act as factories for session objects. They initialize and end them.
    • AudioSession and VideoSession objects provide session management services to participants, such as user joins and leaves. While handling these requests, they usually talk to other system components, such as media processing units and RTP link managers.
  • JMS message paths for an AudioSession
  • Audio Mixing & Performance Tests
    • 6 speakers in each mixer. Two of them were continually talking.
    • One more audio stream is constructed from the mixed stream of all speakers.
    • All streams were 64kbps ULAW.
    • The machine: WinXP, 512 MB memory, 2.5 GHz Intel Pentium 4 CPU.
    • This machine can support around 20 audio mixing sessions
    Number of Mixers   CPU usage (%)   Quality
    5                  12              No loss
    10                 24              No loss
    15                 34              No loss
    20                 46              Negligible loss
  • JMS message paths for a VideoSession
  • Video Mixing & Performance Tests
    • Four video streams are mixed into one video stream.
    • Incoming video streams were 150 kbps H.261 streams.
    • Mixed video stream was H.263 with 18 fps.
    • Linux machine with 1 GB memory and 1.8GHz Dual Intel Xeon CPU.
    • Only 3 video mixers can be supported on this machine.
    Number of Video Mixers   CPU usage (%)
    1                        20
    2                        42
    3                        68
    4                        94
  • Mixed video streams in various media players
  • Image Grabbing & Performance Tests
    • The purpose of image grabbing is to provide users with a meaningful video stream list in a session.
    • Generated images can either be published on topics on the broker network or can be saved to files.
    • An image is saved to disk every 60 seconds in JPEG format.
    • The video stream was an H.261 stream with an average bandwidth of 150 kbps.
    • The same machine as in the video mixing test was used.
    • 50 image grabbers can be supported on this machine.
    Number of Image Grabbers   CPU usage (%)
    10                         15
    20                         35
    30                         50
    40                         60
    50                         70
  • Media Processing Service Distribution
    • MediaServers act as factories for service providers. They start/stop/advertise them.
    • Each MediaServer is independent of the others. New ones can be added dynamically.
    • Service providers can either be started from command line when starting the service container, or they can be started by using the MediaServerManager.
    • New services are assigned to least loaded media processing units.
  • Conclusion
    • Main Contributions:
      • Proposing a new architecture for scalable videoconferencing that separates media delivery, media processing and meeting management
      • Investigating the issues related to using publish/subscribe systems for real-time audio/video delivery
      • Analyzing the performance and the scalability of NaradaBrokering broker network for the distribution of real-time audio and video streams in videoconferencing sessions.
      • Implementing a meeting management and media processing service distribution mechanism based on publish/subscribe middleware.
    • Future Work
      • Media processing service distribution algorithms may be developed for large-scale deployments
      • Audio/video stream delivery through firewalls needs to be investigated
      • More performance tests can be conducted with a higher number of brokers
  • Publications
    • Ahmet Uyar, Wenjun Wu, Geoffrey Fox. “Service-Oriented Architecture for a Scalable Videoconferencing System”. Submitted to IEEE International Conference on Pervasive Services 2005 (ICPS'05) 11-14 July 2005, Santorini, Greece.
    • A. Uyar, G. Fox. “Investigating the Performance of Audio/Video Service Architecture I: Single Broker”. To be presented at The International Symposium on Collaborative Technologies and Systems. May 2005, Missouri, USA.
    • A. Uyar, G. Fox. “Investigating the Performance of Audio/Video Service Architecture II: Broker Network.” To be presented at The International Symposium on Collaborative Technologies and Systems. May 2005, Missouri, USA.
    • Ahmet Uyar, Shrideep Pallickara, Geoffrey Fox. “Towards an Architecture for Audio/Video Conferencing in Distributed Brokering Systems”. In the proceedings of the 2003 International Conference on Communications in Computing, June 23-26, Las Vegas, Nevada, USA.