Experiences with High-Definition Video Conferencing


  1. Experiences with High-Definition Video Conferencing
     Colin Perkins, Alvaro Saurin (University of Glasgow, Department of Computing Science)
     Ladan Gharai, Tom Lehman (University of Southern California, Information Sciences Institute)
     Copyright © 2006 University of Glasgow. All rights reserved.
  2. Talk Outline
     • Scaling multimedia conferencing
     • The UltraGrid system
       – Hardware requirements
       – Software architecture
     • Experimental performance
     • Challenges in congestion control
     • Conclusions
  3. Scaling Multimedia Conferencing
     • Given advances in system power, network bandwidth and video cameras, why are video
       conferencing environments so limited? Why are we stuck with low-quality 352×288 images?
  4. Why do conferencing systems look like this?
  5. …and not like this?
  6. Research Objectives
     • To explore the problems inherent in delivering high-definition interactive multimedia over IP:
       – Related to the protocols
       – Related to the network
       – Related to the end system
     • To push the limits of:
       – Image resolution, frame rate and quality
       – Network and end-system capacity
     • To demonstrate the ability of best-effort IP networks to support high-quality, high-rate
       media with effective congestion control
  7. UltraGrid: High-Definition Conferencing
     Build an HDTV conferencing demonstrator:
     • Standard protocols
       – RTP over UDP/IP
       – HDTV payload formats and TFRC profile
       – Best-effort, congestion-controlled delivery; no additional QoS
     • Commodity networks
       – High-performance IP networks (OC-48 or higher), competing with other IP traffic
       – Local area up to 10 gigabit Ethernet
     • Commodity end systems
       – PC or similar workstation
       – HDTV capture and display
     Project timeline:
       – 1999: HDTV work at ISI starts
       – Nov. 2001: Demo at SC’01, Denver (24-bit colour, 45 fps ⇒ 650 Mbps)
       – Jan. 2002: Public code release (BSD-style open source license)
       – Nov. 2002: Demo at SC’02, Baltimore (24-bit colour, 60 fps ⇒ 1.0 Gbps)
       – Apr. 2005: Full uncompressed HDTV (30-bit colour, 60 fps ⇒ 1.2 Gbps)
       – Sep. 2005: RFC 4175; demo at iGrid’05, San Diego
       – Nov. 2005: Demo at SC’05, Seattle
     UltraGrid: the first HDTV conferencing system using commodity hardware.
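The HDTV payload format mentioned on this slide was standardised as RFC 4175 (RTP payload format for uncompressed video). As a rough illustration of what that payload carries, the sketch below lays out the per-line headers with field widths taken from the RFC; the struct and function names are illustrative and are not UltraGrid's own code.

```c
/* Sketch of the RFC 4175 payload header for uncompressed video (the HDTV
 * payload format referenced on this slide).  Field widths follow the RFC;
 * the struct and function names are illustrative, not UltraGrid code.     */
#include <stdint.h>
#include <stdio.h>

/* After the standard RTP header, the payload starts with a 16-bit extended
 * sequence number, then one or more of these line headers, each describing
 * a fragment of one scan line carried in the packet.                       */
struct line_header {
    uint16_t length;       /* bytes of pixel data in this fragment          */
    uint16_t field_line;   /* F bit (field, for interlace) + 15-bit line no */
    uint16_t cont_offset;  /* C bit (more headers follow) + 15-bit offset   */
};

/* Serialise one line header into network byte order. */
static void pack_line_header(uint8_t buf[6], const struct line_header *lh)
{
    buf[0] = lh->length >> 8;      buf[1] = lh->length & 0xff;
    buf[2] = lh->field_line >> 8;  buf[3] = lh->field_line & 0xff;
    buf[4] = lh->cont_offset >> 8; buf[5] = lh->cont_offset & 0xff;
}

int main(void)
{
    /* Example: 4,800 bytes of line 360, starting at pixel 0, last header. */
    struct line_header lh = { 4800, 360, 0 };
    uint8_t buf[6];
    pack_line_header(buf, &lh);
    for (int i = 0; i < 6; i++) printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}
```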
  8. Media Formats and Equipment
     • Capture and transmit a range of video formats:
       – Standard-definition video: IEEE 1394 + DV camera
       – High-definition video: DVS HDstation or Centaurus capture card
         • 100 MHz PCI-X
         • 720p/1080i HDTV capture from SMPTE-292M
         • Approx. $6,000
     • Video data rates up to 1.2 Gbps
       – Chelsio T110 10-gigabit Ethernet adapter
       – Dual-processor Linux 2.6 system
     [Figure: resolution comparison, CIF (352×288), PAL (720×576), HDTV 720p (1280×720)]
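The 1.2 Gbps figure can be sanity-checked with simple arithmetic. The sketch below assumes SMPTE-292M style 4:2:2 chroma subsampling at 10 bits per component (20 bits per pixel on average) for 720p at 60 fps, plus a rough allowance for RTP/UDP/IP headers on 8,800-byte packets; treat it as a back-of-the-envelope estimate, not the project's exact accounting.

```c
/* Back-of-the-envelope check on the ~1.2 Gbps figure for uncompressed 720p60.
 * Assumes 4:2:2 chroma subsampling at 10 bits per component (20 bits/pixel
 * on average); the rate on the wire also depends on framing overhead and on
 * exactly which samples and ancillary data are carried.                     */
#include <stdio.h>

int main(void)
{
    const double width = 1280, height = 720, fps = 60;
    const double bits_per_pixel = 20.0;            /* 4:2:2, 10-bit          */
    double video_bps = width * height * fps * bits_per_pixel;

    /* ~8,800-byte payloads, roughly 40-50 bytes of RTP/UDP/IP headers each  */
    double overhead = 1.0 + 48.0 / 8800.0;

    printf("raw video:    %.2f Gbps\n", video_bps / 1e9);
    printf("with headers: %.2f Gbps\n", video_bps * overhead / 1e9);
    return 0;
}
```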
  9. Media Formats and Equipment
     • A variety of HDTV cameras are now available:
       – Broadcast-quality cameras: generally expensive, ~$20,000
         • Panasonic AJ-HDC27F, Thomson LDK 6000
         • SMPTE-292M output ⇒ connect directly to UltraGrid, low latency
       – Consumer-grade cameras: priced in the $3,000–5,000 range
         • Sony HVR-Z1E, HDR-FX1; JVC GY-HD-100U HDV Pro
         • No SMPTE-292M output ⇒ converter needed (e.g. AJA HD10A), higher latency
     • Displays must accommodate:
       – 16:9 aspect ratio
       – 1280×720 progressive or 1920×1080 interlaced
 10. Software Architecture
     • Classical media tool architecture:
       – Video capture and display
       – Video codecs (DV and M-JPEG only at present; others can be added)
       – RTP
       – Adaptive video playout buffer
     • Two interesting additions:
       – Congestion control over RTP
       – Sophisticated video sending buffer
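The adaptive playout buffer mentioned above needs a running estimate of network jitter to decide how long to hold incoming data before display. A minimal sketch of the standard RTP interarrival-jitter estimator from RFC 3550 is shown below; UltraGrid's actual playout calculation may differ, and the names here are illustrative.

```c
/* Minimal sketch of the RFC 3550 interarrival jitter estimator that an
 * adaptive playout buffer can be driven from.  Timestamps are in RTP
 * timestamp units; names are illustrative, not UltraGrid's actual code.  */
#include <stdint.h>
#include <stdio.h>

struct jitter_state {
    double  jitter;        /* smoothed jitter estimate (RTP ts units)  */
    int32_t last_transit;  /* previous (arrival time - RTP timestamp)  */
    int     have_last;
};

static void jitter_update(struct jitter_state *js,
                          uint32_t arrival_ts, uint32_t rtp_ts)
{
    int32_t transit = (int32_t)(arrival_ts - rtp_ts);
    if (js->have_last) {
        int32_t d = transit - js->last_transit;
        if (d < 0) d = -d;
        /* J = J + (|D| - J) / 16, per RFC 3550 section 6.4.1 */
        js->jitter += ((double)d - js->jitter) / 16.0;
    }
    js->last_transit = transit;
    js->have_last = 1;
}

int main(void)
{
    struct jitter_state js = { 0.0, 0, 0 };
    /* Two packets stamped 1,500 ts units apart but arriving 1,700 apart. */
    jitter_update(&js, 10000, 5000);
    jitter_update(&js, 11700, 6500);
    printf("jitter estimate: %.2f timestamp units\n", js.jitter);
    return 0;
}
```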
 11. Experimental Performance
     [Diagram: wide-area test topology, with UltraGrid senders and receivers at Seattle, WA
      (SC 2005, AJ-HDC27F camera) and ISI-East in Arlington, VA (LDK 6000 camera), linked by
      10 gigabit Ethernet and OC-192 SONET/SDH paths via Los Angeles, Houston and Chicago]
     • Wide-area HDTV tests on the Internet2 backbone:
       – ISI-East ⇔ ISI-West
       – ISI-East ⇔ Denver (SC’01)
       – ISI-East ⇔ Seattle (SC’05)
     • Demonstrated interactive, low-latency, uncompressed HDTV conferencing between ISI-East
       and Seattle at SC’05: gigabit-rate bi-directional video flows (tested using both HOPI
       and Abilene)
     • Ongoing low-rate tests between ISI-East and Glasgow using 25 Mbps DV format video
 12. [Image slide]
 13. Experimental Performance
     • Environment:
       – Seattle ⇔ ISI-East over Abilene; 14-18 November 2005
       – Best-effort IP service, not QoS-enabled, shared with production traffic
       – 8,800-byte packets; 10 gigabit Ethernet with jumbo frames; OC-192 WAN
     • Packet loss:
       – The overwhelming majority of RTCP reports showed no packet loss
       – Occasional transient loss (≤0.04%) observed due to cross traffic
     • Inter-packet interval (jitter), measured at the receiver:
       – Shows the expected sharp peak with a long tail
       – The network disrupts packet timing, but not significantly for this application;
         the playout jitter buffer compensates
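For context on the packet rates involved: at roughly gigabit speed, 8,800-byte packets leave at around 14,000 packets per second, i.e. one every ~70 µs (the spacing quoted on the TFRC implementation slide below). The sketch simply works through that arithmetic; the 1 Gbps flow rate is an assumed round figure.

```c
/* Rough check of the packet rate and spacing for a gigabit-class flow
 * carried in 8,800-byte packets (jumbo frames).  Purely illustrative.  */
#include <stdio.h>

int main(void)
{
    double rate_bps    = 1.0e9;           /* ~1 Gbps video flow        */
    double packet_bits = 8800.0 * 8.0;    /* 8,800-byte packets        */
    double pps         = rate_bps / packet_bits;

    printf("packets/s: %.0f\n", pps);           /* ~14,200              */
    printf("spacing:   %.1f us\n", 1e6 / pps);  /* ~70 microseconds     */
    return 0;
}
```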
 14. Deployment Issues
     • Good performance on Internet2; usable today
       – Observe occasional loss due to transient congestion
     • HDTV flows are not TCP-friendly and cause transient disruption during loss periods
       – Cannot support large numbers of uncompressed HDTV flows
       – But an active user community exists in well-provisioned regions of the network
         (UltraGrid nodes in the US, Canada, Korea, Spain, the Czech Republic...)
     • Two approaches to wider deployment:
       – Optical network provisioning and/or quality of service
         • E.g. the Internet2 hybrid optical packet network (HOPI), also used for some tests
         • Possible and solves the problem, but expensive and hard to deploy widely
         • Necessary for guaranteed-use deployments
       – Congestion control
         • Adapt the video transmission rate to match network capacity
         • The preferred end-to-end approach for incremental, on-demand deployment
         • Necessary for safety, even if a QoS-provisioned network is available
 15. Congestion Control for Interactive Video
     • TCP is not suitable for interactive video:
       – Abrupt variations in sending rate
       – Couples congestion control and reliability
       – Too slow
     • Obvious alternative: TCP-friendly rate control (TFRC)
       – Well-specified, widely studied, rate-based congestion control
       – Aims to provide relatively smooth variations in sending rate
       – Doesn’t couple congestion response and reliability
       – Two implementation choices:
         • Use DCCP with CCID 3: DCCP implementations are not mature and firewalls pose
           deployment challenges, so not feasible to use at this time
         • Use the RTP profile for TFRC: can be deployed in end systems only (running over
           UDP); easy to develop, deploy, debug and experiment with code
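For reference, TFRC derives its allowed sending rate from the TCP throughput equation of RFC 3448. The sketch below implements that equation using the RFC's parameter names (s: segment size in bytes, R: round-trip time in seconds, p: loss event rate, b = 1, t_RTO approximated as 4R); the example inputs are illustrative and are not taken from the experiments reported here.

```c
/* TCP-friendly throughput equation used by TFRC (RFC 3448, section 3.1):
 *   X = s / ( R*sqrt(2*b*p/3) + t_RTO * (3*sqrt(3*b*p/8)) * p * (1 + 32*p^2) )
 * Returns the allowed sending rate in bytes per second.                     */
#include <math.h>
#include <stdio.h>

static double tfrc_rate(double s, double R, double p)
{
    const double b = 1.0;
    double t_rto = 4.0 * R;                      /* simplification from the RFC */
    double denom = R * sqrt(2.0 * b * p / 3.0)
                 + t_rto * (3.0 * sqrt(3.0 * b * p / 8.0))
                         * p * (1.0 + 32.0 * p * p);
    return s / denom;
}

int main(void)
{
    /* Illustrative inputs: 8,800-byte packets, 100 ms RTT, 0.04% loss rate. */
    double x = tfrc_rate(8800.0, 0.100, 0.0004);
    printf("allowed rate: %.1f Mbps\n", x * 8.0 / 1e6);
    return 0;
}
```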
 16. TFRC Implementation
     • Rate-based algorithm, clocking packets from a sending buffer
     • Sending buffer size chosen to respect the 150 ms one-way latency constraint
       (⇒ a couple of frames)
     • Rate-based control driving a queueing system:
       – Widely spaced (16 ms) bursts of data from the codec
       – Fast, smoothly paced transmission (~70 µs spacing)
     • Mismatched adaptation rates:
       – TFRC ⇒ O(round-trip time)
       – Codec ⇒ O(inter-frame time)
       – Relies on buffering to align rates and varies the codec rate ⇒ problematic for stability
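A minimal sketch of the pacing idea on this slide: the codec hands over a frame's worth of packets roughly every 16 ms, and the sender drains them smoothly at the congestion-controlled rate, about one packet every 70 µs at gigabit speed. The send is stubbed out and the constants are illustrative; this is not UltraGrid's sending-buffer code.

```c
/* Sketch of rate-based pacing: a frame's worth of packets arrives from the
 * codec as a burst every ~16 ms and is transmitted smoothly at the allowed
 * rate (~70 us between packets at ~1 Gbps).  The send is stubbed out; this
 * illustrates the timing logic only.                                        */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static void send_packet(int seq) { (void)seq; /* RTP send would go here */ }

int main(void)
{
    const double rate_bps       = 1.0e9;        /* TFRC-allowed rate        */
    const double packet_bits    = 8800.0 * 8.0; /* 8,800-byte packets       */
    const int packets_per_frame = 20;           /* one frame's burst        */
    double spacing = packet_bits / rate_bps;    /* ~70 us between packets   */

    for (int seq = 0; seq < packets_per_frame; seq++) {
        send_packet(seq);
        struct timespec ts = { 0, (long)(spacing * 1e9) };
        nanosleep(&ts, NULL);                   /* pace the next packet     */
    }
    printf("sent %d packets at %.1f us spacing\n",
           packets_per_frame, spacing * 1e6);
    return 0;
}
```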
 17. TFRC Performance
     • Testing in dummynet: 100 ms RTT, 800 kbps bottleneck, 10 fps M-JPEG
     • Transport protocol is stable on large-RTT paths, less stable for shorter paths
     • Video rate can follow the congestion control rate, provided the frame rate and RTT
       are similar
     [Graphs: desired vs. actual sending rate; throughput with varying RTT]
 18. Implications and Conclusions
     • Well-engineered IP networks can support very high-performance interactive multimedia
       applications
       – The current Internet2 best-effort IP service provides real-time performance suitable
         for gigabit-rate interactive video when shared with other traffic
       – Transient congestion causes occasional transient packet loss, but recall that we added
         a gigabit-rate real-time video flow to an existing network without re-engineering that
         network to support it
     • Initial congestion control experiments raise more questions than they answer
       – Possible to implement, but more sophisticated codecs are needed
       – Difficult to match codec and network rates, which causes bursty behaviour
       – Impact on perceptual quality due to the implied quality variation is unclear
       – Likely easier as video quality, frame rate and network bandwidth increase
 19. UltraGrid: A High Definition Collaboratory
     http://ultragrid.dcs.gla.ac.uk/
