Define the 4 terms… The TCP/UDP/IP suite (Internet networking) provides only best-effort delivery -- No guarantees about end-to-end delay -- No guarantees about the variance of packet delay within a stream. Because the network makes no special effort to deliver packets in a timely manner, multimedia is hard!
Phone with delay? On the Internet and in other networks, Quality of Service (QoS) is the idea that transmission rates, error rates, and other characteristics can be measured, improved, and, to some extent, guaranteed in advance. One approach: schemes in which certain packets are given higher priority
-- Variable-bitrate media (where the number of packets per “unit” of media varies) makes this relationship hard to model -- possible question here.
Scalable encoding will be discussed a little later (built into encoding schemes)
Pulse Code Modulation: PCM (telephone): 8000 samples/s x 8 bits = 64 Kb/s; PCM (CD): 44100 samples/s x 16 bits = 705.6 Kb/s (mono), x 2 = 1411.2 Kb/s (stereo)
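The PCM arithmetic above can be sketched in a few lines (a minimal helper written for this note, not part of any standard library):

```python
# Sketch: PCM bit rate = sampling rate x bits per sample x channels.
def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels=1):
    """Bit rate in kilobits per second (1 Kb = 1000 bits)."""
    return sample_rate_hz * bits_per_sample * channels / 1000

telephone = pcm_bitrate_kbps(8000, 8)        # telephone-quality voice: 64 Kb/s
cd_mono   = pcm_bitrate_kbps(44100, 16)      # CD audio, one channel: 705.6 Kb/s
cd_stereo = pcm_bitrate_kbps(44100, 16, 2)   # CD audio, stereo: 1411.2 Kb/s
```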
MPEG-1: The original MPEG standard, designed for computer use. It allows roughly 352x240 video at 30 frames per second.
MPEG-2: A higher-resolution version of MPEG-1 designed for digital television broadcast.
MPEG-3: Was designed for HDTV. However, HDTV is just normal TV with a higher resolution and frame rate, so this standard was folded into MPEG-2 and is no longer used.
MPEG-4: A standard for digital video, finalized in late 1998, designed for use over low-bit-rate wireless and mobile communication systems. DCT alone cannot provide the required compression to operate over this type of line, so MPEG-4 does not enforce a single encoding method; it leaves the choice of encoding method up to the designer.
MPEG-7: Yet another MPEG standard. The focus of MPEG-7 is a representation for digital video that allows it to be stored and queried by content in a video database.
Streaming often requested through a web client (browser). Audio/video is not tightly integrated with the browser, so helper applications (media players) are needed
Audio/video break up… Take the simple case. A simple architecture: the browser requests the object(s) and, after receiving them, passes them to the player for display -- No pipelining
Web browser requests and receives a meta file (a file describing the object) instead of receiving the media file itself; browser launches the appropriate player and passes it the meta file; player sets up a TCP connection with the web server and downloads the file -- TCP bad…. Delays… -- HTTP bad… No control (not rich enough)
2 servers: HTTP (meta data), streaming (media)
Use UDP: server sends at a rate (compression and transmission) appropriate for the client; to reduce jitter, the player buffers initially for 2-5 seconds, then starts display. Use TCP: sender sends at the maximum possible rate under TCP; retransmits when an error is encountered; the player uses a much larger buffer to smooth TCP's delivery rate -- Chance of starvation with a small buffer
For the user to control display: rewind, fast forward, pause, resume, etc… Out-of-band protocol (uses two connections: one for control messages (port 554) and one for the media stream). RFC 2326 permits use of either TCP or UDP for the control-message connection, sometimes called the RTSP channel. As before, the meta file is communicated to the web browser, which then launches the player; the player sets up an RTSP connection for control messages in addition to the connection for the streaming media
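The control messages on the RTSP channel are simple text requests, similar in shape to HTTP. A minimal sketch of how such a request could be assembled (the URL and session ID below are made up for illustration; real clients also send more headers):

```python
# Sketch: the textual shape of an RTSP (RFC 2326) control message.
# Methods include DESCRIBE, SETUP, PLAY, PAUSE, TEARDOWN.
def rtsp_request(method, url, cseq, session=None):
    """Build one RTSP/1.0 request as CRLF-delimited text."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    if session:
        lines.append(f"Session: {session}")   # ties request to a media session
    return "\r\n".join(lines) + "\r\n\r\n"    # blank line ends the request

# Hypothetical example: resume playback of a stream.
play = rtsp_request("PLAY", "rtsp://example.com/movie", cseq=3, session="12345")
```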
Bit rate is 8 KBytes/sec (64 Kb/s), and every 20 msec the sender forms a packet of 160 bytes + a header to be discussed below. In the ideal case this just works, but…. (next slide)
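The packetization arithmetic from the example above, as a sketch:

```python
# Sketch: packetizing 64 Kb/s PCM voice into 20 ms chunks.
BYTES_PER_SEC = 8000        # 64 Kb/s = 8000 bytes of audio per second
PACKET_INTERVAL_S = 0.020   # the sender emits one packet every 20 msec

payload_bytes = round(BYTES_PER_SEC * PACKET_INTERVAL_S)  # audio bytes per packet
packets_per_sec = round(1 / PACKET_INTERVAL_S)            # packets sent per second
```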
Loss is meant in a broader sense: the packet never arrives, or arrives later than its scheduled playout time
With a fixed playout delay, the delay should be as small as possible without missing too many packets
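The trade-off can be illustrated with a small sketch (the delay samples below are made-up numbers, not from the slides): any packet whose network delay exceeds the chosen playout delay q misses its playout time and counts as lost.

```python
# Sketch: fraction of packets that arrive too late for a fixed playout delay q.
def late_fraction(delays_ms, playout_delay_ms):
    """Packets with network delay > q miss their scheduled playout time."""
    late = sum(1 for d in delays_ms if d > playout_delay_ms)
    return late / len(delays_ms)

# Hypothetical per-packet network delays (ms): mostly ~100 ms, a few outliers.
delays = [80, 90, 95, 100, 110, 150, 300, 90, 85, 120]
# Small q -> low interactive delay but more "lost" packets; large q -> the reverse.
```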
The playout delay is computed for each talk spurt based on the observed average delay and the observed deviation from this average. The beginning of a talk spurt is identified by examining the timestamps and/or sequence numbers of successive chunks. You can ask a couple of questions here: what are the disadvantages? Where did we use average-delay estimation etc. before?
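The estimation above can be sketched with exponentially weighted moving averages (the textbook-style formulation; the gain u and safety factor K below are typical illustrative values, not mandated by any standard):

```python
# Sketch: adaptive playout-delay estimation per talk spurt (EWMA style).
U = 0.01   # EWMA gain: weight given to each new delay sample (assumed value)
K = 4      # safety margin in units of the delay deviation (assumed value)

def update(d, v, send_ts, recv_ts):
    """Update the average delay d and deviation v from one packet's timestamps."""
    sample = recv_ts - send_ts                # this packet's observed delay
    d = (1 - U) * d + U * sample              # smoothed average delay
    v = (1 - U) * v + U * abs(sample - d)     # smoothed deviation from average
    return d, v

def playout_time(first_send_ts, d, v):
    """Playout point for the first chunk of a talk spurt; later chunks follow it."""
    return first_send_ts + d + K * v
```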
Mixed-quality streams are used to include redundant duplicates of chunks. Upon loss, play out the available redundant chunk (a lower-quality one). Can recover from single losses
Has no redundancy overhead, but can cause playout delay beyond real-time requirements… why does it work?
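Why interleaving works can be seen in a small sketch (a toy implementation written for this note): chunks are reordered before transmission, so a burst loss of consecutive packets becomes several isolated single-chunk gaps in playout order, which concealment can mask.

```python
# Sketch: interleave chunks column-by-column over square groups of depth x depth.
def interleave(chunks, depth):
    """Reorder chunks so originally-adjacent chunks are sent far apart."""
    out = []
    for base in range(0, len(chunks), depth * depth):
        block = chunks[base:base + depth * depth]
        for col in range(depth):
            out.extend(block[col::depth])     # take every depth-th chunk
    return out

original = list(range(16))        # playout order: chunks 0..15
sent = interleave(original, 4)    # transmission order: 0,4,8,12, 1,5,9,13, ...
# Losing a burst of 4 consecutive *sent* packets (e.g. 0,4,8,12) leaves only
# isolated single-chunk gaps in playout order.
```

For square blocks the same function also de-interleaves: applying it twice restores the original order.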
Thin protocol As is, doesn’t specify everything you need Serves as a skeleton that can be filled out
Framing: identifying and handling a unit of transfer suitable to the application level (the ALF principle). Multiplexing: in case we want to put different media within the same stream. Real-time delivery: possibly sacrificing reliability for timeliness (but RTP does not in itself provide this). Synchronization: both among successive packets in a stream and between different media in different streams. Feedback: on who is receiving data, and with what quality
Multiple streams can be differentiated. Can tell if packet dropped.
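Both properties come from fields in the fixed 12-byte RTP header: the SSRC differentiates streams, and the sequence number reveals dropped packets. A sketch of packing that header (field layout per RFC 3550; the example values are made up):

```python
import struct

# Sketch: the fixed 12-byte RTP header.
# Byte 0: version(2) | padding(1) | extension(1) | CSRC count(4)
# Byte 1: marker(1) | payload type(7); then seq(16), timestamp(32), SSRC(32).
def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    vpxcc = 2 << 6                            # version 2, no padding/extension/CSRCs
    m_pt = (marker << 7) | payload_type       # marker bit + 7-bit payload type
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

# Hypothetical example: PCM audio (payload type 0), first packet of a stream.
hdr = rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x1234)
```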
Control session associated with each RTP session. Control session has same participants as RTP session. Provides feedback on quality of data stream. Carries information to associate multiple streams as coming from the same participant. Scalable way of estimating group size. Synchronization mechanism across streams from same participant.
If each receiver sends RTCP packets to all other receivers, the resulting traffic load can be large. RTCP adjusts the interval between reports based on the number of participating receivers. Typically, RTCP bandwidth is limited to 5% of the session bandwidth, divided between the sender reports (25%) and the receiver reports (75%)
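The scaling rule above can be sketched as follows (a simplification of the RFC 3550 rules: the minimum-interval and randomization details are omitted):

```python
# Sketch: RTCP report interval grows with group size so total RTCP traffic
# stays within a fixed fraction of the session bandwidth.
def rtcp_interval_s(session_bw_bps, n_members, avg_rtcp_bits, is_sender):
    """Seconds between reports for one member of the given class."""
    rtcp_bw = 0.05 * session_bw_bps                      # RTCP gets 5% of session bw
    share = (0.25 if is_sender else 0.75) * rtcp_bw      # senders 25%, receivers 75%
    return n_members * avg_rtcp_bits / share
```

E.g. for a 1 Mb/s session with 75 receivers and 500-bit reports, each receiver reports once per second; with more receivers, each one reports less often.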
Multimedia Networking Ashwin Bharambe Carnegie Mellon University 15-441 Networking, Spring 2003 Tuesday, April 22, 2003
What media are we talking about? Audio and/or video. This lecture is concerned with: