This document discusses multimedia applications and protocols. It begins by outlining the key differences between multimedia and classic applications, such as multimedia being highly delay-sensitive but loss-tolerant. It then covers the main classes of multimedia applications and their requirements and constraints. The document also examines the problems today's best-effort Internet poses for multimedia, such as limited bandwidth, packet jitter, and packet loss, along with solutions to address these issues. Finally, it introduces common multimedia protocols, including RTP for framing and synchronization and RTCP for feedback.
The document discusses multimedia and multimedia applications. It covers:
1. The differences between multimedia applications and classic applications in terms of delay sensitivity and packet loss tolerance.
2. The main classes of multimedia applications including streaming stored audio/video, streaming live audio/video, and real-time interactive audio/video. It outlines the requirements and constraints of each class.
3. Problems with delivering multimedia over the Internet like limited bandwidth, packet jitter, and packet loss. It describes solutions to these problems including compression techniques, playout buffering, and forward error correction.
4. Common multimedia protocols, including RTP for framing, multiplexing, and synchronization, and RTCP for feedback. It explains how these protocols operate together.
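The playout buffering mentioned in point 3 can be sketched as a fixed playout delay: a receiver plays each packet at its send timestamp plus a constant delay and discards anything that arrives after its slot. A minimal sketch (function name, timestamps, and delay values are hypothetical):

```python
def schedule_playout(packets, fixed_delay):
    """Fixed playout delay: a packet with send timestamp t is played at
    t + fixed_delay; anything arriving after its slot counts as lost."""
    played, lost = [], []
    for seq, send_ts, arrival_ts in packets:
        playout_ts = send_ts + fixed_delay
        if arrival_ts <= playout_ts:
            played.append((seq, playout_ts))
        else:
            lost.append(seq)
    return played, lost

# (seq, send time, arrival time) in ms; jitter makes packet 2 very late.
packets = [(1, 0, 20), (2, 20, 90), (3, 40, 50)]
played, lost = schedule_playout(packets, fixed_delay=60)
```

This illustrates the basic trade-off: a larger fixed delay absorbs more jitter (a delay of 80 ms would save packet 2 here) but adds end-to-end latency, which interactive applications cannot tolerate.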
This document provides an overview of digital audio compression techniques. It discusses how audio compression removes redundant or irrelevant information to reduce required storage space and transmission bandwidth. It describes how psychoacoustic modeling is used to eliminate inaudible components based on principles of masking. Spectral analysis is performed using transforms or filter banks to determine masking thresholds. Noise allocation quantizes frequency components to minimize noise while meeting thresholds. Additional techniques like predictive coding, coupling/delta encoding, and Huffman coding provide further compression. The encoding process involves analyzing, quantizing, and packing audio data into frames for storage or transmission.
This document summarizes a tutorial on video over 802.11 networks. It discusses motivations for using 802.11 for video, outlines various use cases and their requirements. It then covers challenges of transmitting video over wireless like interference, limited channels and non-deterministic medium access. Current 802.11 mechanisms for video are outlined along with their limitations. Possible areas for further work are identified like content-aware techniques and inter-layer communication. Related activities outside 802.11 are also briefly mentioned.
This document provides an overview of MPEG Audio Compression Layer 3 (MP3). It discusses how MP3 was developed under EUREKA project EU147 for Digital Audio Broadcasting. It achieves compression ratios of over 12:1 for CD-quality audio using psychoacoustic models to remove inaudible components. The encoder uses filter banks and quantization with Huffman coding, while controlling distortion and rate through nested feedback loops.
Audio Compression Techniques
Audio data compression: a type of lossy or lossless compression in which the amount of data in a recorded waveform is reduced for transmission, with (lossy) or without (lossless) some loss of quality; used in CD and MP3 encoding and Internet radio.
Dynamic range compression, also called audio level compression: a technique in which the dynamic range of an audio waveform, the difference between its loudest and quietest parts, is reduced.
Audio compression can be either lossless, which reduces file size while retaining all audio information, or lossy, which greatly reduces file size but decreases sound quality by losing some audio information. Common lossless formats are AIFF, WAV, and FLAC, while common lossy formats are MP3, AAC, and Vorbis. The quality and size of compressed audio files depends on factors like sample rate, bit depth, bit rate, and number of channels. Higher values for these factors generally mean higher quality audio but larger file sizes.
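The relationship between those factors and file size is simple arithmetic; a quick sketch (the function name is hypothetical; values assume one minute of CD-quality stereo versus a 128 kbps lossy bitrate):

```python
def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed PCM size: samples/sec * bytes/sample * channels * time."""
    return sample_rate * (bit_depth // 8) * channels * seconds

cd_minute = pcm_size_bytes(44100, 16, 2, 60)  # one minute, CD quality
mp3_minute = 128_000 // 8 * 60                # one minute at 128 kbps
ratio = cd_minute / mp3_minute                # roughly 11:1
```

One minute of CD audio is about 10.6 MB uncompressed versus under 1 MB at 128 kbps, which is where commonly quoted compression ratios around 11:1 to 12:1 come from.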
This document summarizes multipoint communication and IP multicast technologies. It discusses various multipoint applications and challenges at different layers. It then covers IP multicast addressing, routing algorithms like flooding, spanning trees, reverse path forwarding. Multicast routing protocols like DVMRP, MOSPF and PIM are explained. IGMP is described for host membership reporting. Transport layer protocols for reliable multicast are also mentioned.
This document discusses various topics related to data compression including compression techniques, audio compression, video compression, and standards like MPEG and JPEG. It covers lossless versus lossy compression, explaining that lossy compression can achieve much higher levels of compression but results in some loss of quality, while lossless compression maintains the original quality. The advantages of data compression include reducing file sizes, saving storage space and bandwidth.
This document summarizes a seminar presentation on audio compression techniques. It introduces common audio compression methods like PCM, DPCM, adaptive DPCM, linear predictive coding, perceptual coding, and MPEG audio coders. Specific techniques covered include third order predictive DPCM, backward and forward adaptive bit allocation used in Dolby AC-1. Applications of audio compression include conferencing, broadcasting radio programs by satellite, and saving memory space in sound cards.
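The predictive DPCM idea mentioned here can be sketched with a first-order predictor that transmits only sample-to-sample residuals (function names are hypothetical; real DPCM coders also quantize the residual, which this lossless sketch omits):

```python
def dpcm_encode(samples):
    """First-order predictive coding: predict each sample as the previous
    one and emit only the (typically small) prediction residual."""
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

x = [100, 102, 105, 104, 101]   # smooth signal -> small residuals
residuals = dpcm_encode(x)      # [100, 2, 3, -1, -3]
```

Because audio samples are strongly correlated, the residuals have a much smaller range than the raw samples, so they can be coded with fewer bits.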
This document provides an overview of MPEG-1 audio compression. It describes the key components of the MPEG-1 audio encoder including the polyphase filter bank that transforms audio into frequency subbands, the psychoacoustic model that determines inaudible parts of the signal, and the coding and bit allocation process that assigns bits to subbands. The overview concludes by noting that MPEG-1 audio provides high compression while retaining quality and paved the way for future audio compression standards.
A presentation covering some basic aspects of digital video data and the compression of video images. The ATSC system architecture is shown using the OSI 7-layer model from data communication theory. Video compression techniques are briefly covered.
This document discusses audio compression techniques. It begins by defining audio and compression. There are two main types of audio compression: lossy and lossless. Lossy compression reduces file sizes but results in some quality loss, while lossless compression allows the file to be decompressed back to its original quality. Common lossy audio compression methods are discussed, including those based on psychoacoustics, i.e. how humans perceive sound. MPEG layers are then introduced as a standard for audio compression, with Layer I having the highest quality but also the highest bitrate, and Layer III providing greater compression while retaining high quality at lower bitrates such as 64 kbps. Effectiveness is shown to increase with each newer layer.
Development of a Multipurpose Audio Transmission System on the Internet (Takashi Kishida)
The document describes the development of a multipurpose audio transmission system called MRAT to enable robust and low-latency audio communication over the Internet for various uses. MRAT has three modes (chorus, conversation, broadcast) that can adapt to different communication scenarios by prioritizing either low delay or high robustness. The system was tested successfully in distance learning and chorus applications.
The document discusses MPEG-2 transport streams, which allow multiplexing of audio, video and other data into a single format suitable for transmission and storage. It describes the two multiplexing methods - program streams designed for error-free applications, and transport streams using fixed size packets for lossy applications. Transport streams carry multiple programs using packet identification and program mapping tables to associate elementary streams.
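The fixed-size transport packets mentioned above are 188 bytes long and start with a 0x47 sync byte; packet identification is a 13-bit PID field in the header. A minimal header parse might look like this (field layout per MPEG-2 Systems; the function name is hypothetical):

```python
def parse_ts_header(packet):
    """Parse the 4-byte header of a 188-byte MPEG-2 transport packet."""
    if len(packet) != 188 or packet[0] != 0x47:   # fixed size, sync byte
        raise ValueError("not a valid TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)
    continuity_counter = packet[3] & 0x0F
    return pid, payload_unit_start, continuity_counter

# A null (stuffing) packet carries the reserved PID 0x1FFF:
pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
pid, pusi, cc = parse_ts_header(pkt)
```

A demultiplexer filters packets by PID and uses the program mapping tables (themselves carried on known PIDs) to learn which PIDs hold each program's elementary streams.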
Advances in Network-adaptive Video Streaming (Videoguy)
1. Advances in network-adaptive video streaming allow for better delivery of video over best-effort packet networks by jointly optimizing source coding, signal processing, and packet transport.
2. Techniques like adaptive media playout, rate-distortion optimal packet scheduling, and network-adaptive packet dependency management can reduce latency and packet loss while maximizing reconstruction quality.
3. Recent work has shown that approaches like proxy servers using rate-distortion optimized streaming, path diversity, and accelerated retroactive decoding can further improve streaming performance over lossy networks.
The document discusses several standard and proprietary streaming media protocols. It introduces Real-Time Transport Protocol (RTP) and Real-Time Control Protocol (RTCP) which transport streaming media and provide quality of service reports. It also describes Real Time Streaming Protocol (RTSP) which provides playback controls. Synchronized Multimedia Integration Language (SMIL) is mentioned as an XML language for multimedia content. Major companies like Real, Microsoft, and Apple are noted to use similar but proprietary protocols instead of the standards.
This document discusses various approaches for multimedia conferencing including centralized, distributed, and peer-to-peer architectures. It covers key considerations like transport protocols, audio and video quality, security, and floor control. It also discusses using IP multicast versus application-level multicast and different audio mixing and adaptive playout techniques.
The document discusses audio multimedia services and provides information on various topics related to audio, including:
1. It discusses the human ear and how it perceives sound in both the time and frequency domains.
2. It provides an overview of common audio applications and modulation techniques used to transmit audio signals.
3. It describes various audio codecs, their classifications, compression techniques, latency characteristics, and examples like MP3, AAC, and speech codecs like AMR and G.711.
4. It covers topics related to speech compression techniques, files/containers for audio like WAV and common physical formats like CD.
5. It also discusses audio hardware, wires, and connectors, both audio-only and combined audio/video.
The document discusses audio compression techniques. It begins with an introduction to pulse code modulation (PCM) and then describes μ-law and A-law compression standards which compress audio using companding algorithms. It also covers differential PCM and adaptive differential PCM (ADPCM) techniques. The document then discusses the MPEG audio compression standard, including its encoder architecture, three layer standards (Layers I, II, III), and applications. It concludes with a comparison of various MPEG audio compression standards and references.
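The companding idea behind μ-law can be sketched in its continuous-amplitude form with μ = 255 as in G.711 (real codecs then quantize the companded value to 8 bits, which this sketch omits; function names are hypothetical):

```python
import math

def mu_law_compress(x, mu=255):
    """mu-law companding of a normalized sample x in [-1, 1]."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=255):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

# Quiet samples are boosted before quantization, giving them relatively
# finer resolution; loud samples are squeezed together.
quiet = mu_law_compress(0.01)   # much larger than 0.01
roundtrip = mu_law_expand(quiet)
```

This matches how human hearing works: we are far more sensitive to level differences in quiet passages than in loud ones, so spending resolution there is perceptually efficient.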
Digital signal processing through speech, hearing, and Python (Mel Chua)
Slides from PyCon 2013 tutorial reformatted for self-study. Code at https://github.com/mchua/pycon-sigproc, original description follows: Why do pianos sound different from guitars? How can we visualize how deafness affects a child's speech? These are signal processing questions, traditionally tackled only by upper-level engineering students with MATLAB and differential equations; we're going to do it with algebra and basic Python skills. Based on a signal processing class for audiology graduate students, taught by a deaf musician.
The document discusses applications and simulations of error correction coding (ECC) for multicast file transfer. It provides an overview of different ECC and feedback-based multicast protocols and evaluates their performance based on simulations. Reed-Solomon coding on blocks provided faster decoding times than on entire files, while tornado coding had the fastest decoding but required slightly more packets for reconstruction. Simulations of protocols like MFTP and MFTP/EC using network simulators showed that using ECC like Reed-Muller codes significantly improved performance over regular MFTP.
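A single XOR parity packet per block is the simplest form of the erasure coding evaluated here, far weaker than Reed-Solomon or tornado codes but enough to show the principle (function names are hypothetical):

```python
def xor_parity(packets):
    """XOR together a block of equal-length packets into one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover_one(received, parity):
    """Recover a single erased packet (marked None) from the survivors
    plus the parity packet; XOR parity cannot repair two losses per block."""
    missing = received.index(None)
    survivors = [p for p in received if p is not None]
    return missing, xor_parity(survivors + [parity])

block = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(block)
idx, rebuilt = recover_one([b"abcd", None, b"ijkl"], parity)
```

For multicast the appeal is that one extra repair packet can fix a *different* lost packet at each receiver, avoiding per-receiver retransmissions.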
This document summarizes key concepts related to digitizing audio/video for transmission over IP networks. It discusses how analog signals are converted to digital, encoding standards that tradeoff quality for size, and protocols like RTP and RTCP that add sequencing and timing to allow reconstruction of signals despite variable network delays. Real-time transmission requires timely delivery, and buffering at receivers can compensate for small jitter but not packet loss. Standards like H.323 and SIP define signaling protocols to set up multimedia sessions and calls over IP networks.
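The timing information RTP adds is what lets a receiver quantify the variable delays; RFC 3550 specifies a running jitter estimate J = J + (|D| - J)/16, where D is the change in transit time between consecutive packets. A sketch (the transit values are made up):

```python
def update_jitter(jitter, transit_prev, transit_now):
    """One step of the RFC 3550 interarrival jitter estimator:
    J = J + (|D| - J) / 16, where D is the change in transit time
    (arrival time minus RTP timestamp) between consecutive packets."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16

# Hypothetical per-packet transit times, in the same units as RTP timestamps.
transits = [50, 55, 48, 52]
j = 0.0
for prev, now in zip(transits, transits[1:]):
    j = update_jitter(j, prev, now)
```

The smoothed estimate is carried back to senders in RTCP receiver reports, and a receiver can use it to size its playout buffer.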
The document provides an overview of the TCP/IP model and networking concepts like Ethernet, ARP, IP, TCP and ICMP. It describes each layer of the OSI model and how protocols like IP, TCP and ARP operate at different layers. Key points covered include IP and MAC addresses, Ethernet frame format, ARP request/reply, IP and TCP header formats, ICMP message types, TCP 3-way and 4-way handshake, and TCP/IP ports and sequence numbers.
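One concrete detail shared by the IP, TCP, and ICMP headers mentioned above is the 16-bit one's-complement checksum; a sketch of how it is computed and verified (the header bytes below are made up):

```python
def internet_checksum(data):
    """16-bit one's-complement checksum used by the IP, TCP, UDP
    and ICMP headers."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return (~total) & 0xFFFF

# Compute a checksum, append it, and verify: the checksum of the
# whole then comes out to zero.
header = b"\x45\x00\x00\x1c\x00\x01"  # hypothetical header bytes
chk = internet_checksum(header)
verified = internet_checksum(header + chk.to_bytes(2, "big")) == 0
```

The one's-complement arithmetic is why a receiver can validate a header by summing it with the checksum field included and checking for zero.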
PyCon APAC 2014, Taipei, Taiwan: a real-time audio spectrogram in Python 3, importing PyAudio, Pygame, and Pylab, with comments on native-language programming.
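The core of such a spectrogram is a per-frame magnitude spectrum; a pure-Python sketch using a direct DFT instead of PyAudio/Pylab (a real implementation would capture frames from a microphone and use an FFT):

```python
import cmath
import math

def dft_magnitudes(frame):
    """Magnitude spectrum of one audio frame via a direct DFT; this is
    what an FFT-based spectrogram computes for each time slice."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep only positive frequencies

# A sine wave landing exactly on bin 5 of a 64-sample frame:
n, freq_bin = 64, 5
frame = [math.sin(2 * math.pi * freq_bin * t / n) for t in range(n)]
mags = dft_magnitudes(frame)
peak = max(range(len(mags)), key=mags.__getitem__)
```

Stacking such spectra over successive overlapping frames, with magnitude mapped to color, yields the spectrogram image.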
Video quality measurements can be performed using subjective, objective, and payload-based methods. Subjective methods involve human assessment while objective methods use measurement devices and are repeatable for testing and monitoring. Payload-based methods assess video quality by comparing the original and distorted video. Standardization bodies have defined various levels of measurement including transport, transaction, and content levels to analyze video quality from different perspectives.
This document introduces MPEG-4 and its capabilities for creating interactive multimedia scenes. It discusses the MPEG standards family and focuses on MPEG-4. MPEG-4 allows encoding of audio and video objects separately, placing them interactively within a 3D space. This enables interactivity, like changing viewpoints. Profiles allow tailoring implementations to specific applications. MPEG-4 also supports intellectual property management to help content owners. The document provides an example scenario of using separate audio and video objects within a 3D television news broadcast to illustrate MPEG-4's capabilities.
This document summarizes various audio compression techniques used to reduce the required storage space and transmission bandwidth for digital audio. It discusses lossless compression techniques that remove redundant data without degrading quality, as well as lossy techniques that remove inaudible or irrelevant information, resulting in smaller file sizes but some loss of quality. The key techniques described include psychoacoustic modeling to determine inaudible components, spectral analysis using transforms or filter banks, noise allocation to minimize quantization noise, and additional methods like predictive coding, coupling/delta encoding, and Huffman coding.
Review of video over IP testing tools including: video syntax analyzer, pixel based measurement indexes like PSNR and SSIM and the tools to measure them, IP based video quality testing.
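Of the indexes above, PSNR is simple enough to sketch directly: 10·log10(MAX²/MSE) between the original and distorted frames (flat pixel lists stand in for real frames here; the function name is hypothetical):

```python
import math

def psnr(original, distorted, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-size frames,
    here represented as flat lists of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

orig = [100, 120, 140, 160]
dist = [101, 119, 141, 159]  # every pixel off by one, so MSE = 1
quality_db = psnr(orig, dist)
```

SSIM, by contrast, compares local luminance, contrast, and structure statistics, which is why it tracks perceived quality better than a plain error measure like PSNR.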
Multimedia data compression challenges and their solutions (hamsbhai495)
The document discusses different classes of multimedia applications including streaming stored audio and video, streaming live audio and video, real-time interactive audio and video, and others. It covers challenges like limited bandwidth, packet jitter, and packet loss. It also describes solutions to these challenges including various compression techniques, fixed playout delay, forward error correction, and interleaving. Key protocols discussed are RTP for real-time transmission and RTSP for control. Examples are given of how streaming stored multimedia may work from a web server.
This document provides an overview and summary of a lecture on real-time communication over computer networks:
1. The lecture covered techniques for real-time multimedia communication implemented at the transport and application layers, including buffering to remedy jitter, forward error correction to recover lost packets, and interleaving to reduce the effects of packet loss bursts.
2. Key protocols discussed include RTP/RTCP for real-time media transport, RTSP for streaming media applications, and H.323 for videoconferencing.
3. Challenges for real-time multimedia over best-effort IP networks are meeting delay constraints for interactive applications while providing loss tolerance, and supporting large-scale multicast sessions.
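The interleaving technique in point 1 can be sketched as a row/column reordering (function names are hypothetical; the sketch assumes the packet count is divisible by the depth):

```python
def interleave(packets, depth):
    """Reorder packets so that originally adjacent packets end up
    `depth` positions apart on the wire."""
    return [p for i in range(depth) for p in packets[i::depth]]

def deinterleave(packets, depth):
    """Inverse reordering, restoring the original playback order."""
    cols = len(packets) // depth
    rows = [packets[i * cols:(i + 1) * cols] for i in range(depth)]
    return [rows[r][c] for c in range(cols) for r in range(depth)]

seq = list(range(12))
mixed = interleave(seq, 4)
burst = mixed[:3]              # a burst of 3 consecutive losses on the wire
restored = deinterleave(mixed, 4)
```

After deinterleaving, the burst corresponds to widely spaced packets in playback order, turning one long audible gap into several short ones that concealment can hide, at the cost of added delay.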
The document discusses digital transmission fundamentals, including:
- Digital representation of analog signals involves sampling, quantization, and pulse code modulation.
- The sampling rate must be at least twice the bandwidth of the signal to allow perfect reconstruction.
- Quantization maps samples to discrete levels, introducing quantization error. More levels reduce error but increase transmission bandwidth needs.
- Digital transmission enables long distance communication by regeneration of the digital signal rather than analog amplification, overcoming distance limitations of analog systems.
classes of Multimedia_Currently, multimedia has become a very common method o...JeyaPerumal1
Multimedia is something that we often encounter around us. I don’t know what form, but still, multimedia is a very interesting thing to discuss. Many types of multimedia can be known. Even multimedia can also be referred to as an advanced technology that facilitates the dissemination of information to the general public.
This document discusses various topics related to multimedia, including image and video compression techniques, audio coding standards, and synchronizing multiple media using SMIL. It provides examples of lossy and lossless compression methods and explains how they work. Key compression algorithms mentioned are JPEG, GIF, MPEG, and MP3. Streaming media delivery and the factors that affect it are also covered.
The document discusses internet video streaming versus IPTV and the challenges of streaming multimedia over the internet. It covers topics like the difference between internet video and IPTV, characteristics of multimedia streaming, challenges of UDP for streaming, and suggestions to improve streaming stability and quality of service. It suggests standardizing congestion control algorithms and using techniques like forward error correction to improve reliability of multimedia streams over UDP.
The document provides information about a multimedia streaming module, including:
- The module code, title, level, and credit value
- Assessment requirements including creating a live/on-demand streaming media station and accompanying website
- An overview of topics covered in the module like media encoding, streaming servers, planning live broadcasts, and streaming non-audio/video content
- Considerations for streaming like file sizes, frame rates, formats, and computer hardware requirements
This document discusses protocols for real-time multimedia applications such as voice over IP. It introduces the Real-Time Protocol (RTP) which specifies packet structures for carrying audio and video data. RTP runs on top of UDP, providing functions like payload type identification, sequence numbering, and time stamping. It allows for interoperability between multimedia applications that both implement RTP. The document also discusses the Session Initiation Protocol (SIP) which is used to initialize multimedia sessions and exchange session description and control messages.
This document discusses multimedia concepts including audio encoding, video encoding, and digital formats. It provides information on how audio is converted to digital form through sampling and quantization. Key video encoding concepts covered include luminance, chrominance, resolution, and frame rate. Common audio formats like WAV, AIFF, and video formats like MPEG, AVI are also summarized. The document concludes that a lack of standardization across formats has made building multimedia systems more challenging.
Mohammed Hussein's document discusses different types of multimedia services including streaming stored audio/video, streaming live audio/video, and interactive real-time audio/video. It provides approaches to streaming stored audio/video files and discusses applications like video on demand. Characteristics of real-time interactive services are described such as time relationship, jitter, timestamps, playback buffers, and ordering. Applications like video conferencing and voice over IP are also summarized along with protocols used.
The document discusses audio and video streaming over the internet. It covers protocols like TCP, UDP, RTP and RTSP that are used for real-time media streaming. It also discusses error correction techniques like piggybacking and interleaving. Various streaming media delivery methods are described like live broadcasting, video on demand, and video conferencing. Limitations of streaming media and popular streaming servers are also summarized.
The Real-time Transport Protocol (RTP) is a network protocol for delivering audio and video over IP networks. RTP is used in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications including WebRTC, television services and web-based push-to-talk features.
Linear Programming Case Study - Maximizing Audio QualitySharad Srivastava
This document presents a linear programming problem to maximize audio quality for real-time multimedia applications under bandwidth and delay constraints. It formulates the problem of optimizing codec selection to maximize MOS score given limitations of available bandwidth and delay. It provides sample codec data and implements the linear program for different network conditions, finding optimal mixes of codecs to achieve the best possible MOS within each set of constraints.
The document discusses best practices for digitizing and delivering audio and video content. It covers topics such as digitization workflows, file formats, compression standards, and streaming technologies. It also provides an overview of audio and video digitization services available at Indiana University, as well as several digital library projects involving audio and video underway there.
Streaming Video over a Wireless Network.pptVideoguy
The document discusses streaming video over wireless networks and some of the challenges involved. It outlines problems with bandwidth variability in wireless networks and introduces approaches to estimate available bandwidth more accurately to improve video streaming performance. These include using packet pair and packet train techniques to measure dispersion and develop new bandwidth estimation methods that account for the characteristics of wireless networks. The goal is to allow applications like video players to adapt streaming rates to match available capacity and reduce problems like packet loss.
This document discusses the potential for streaming video technology within corporations to improve communication with customers and employees. It outlines several challenges with streaming video, such as varying bandwidth limitations and network quality issues. It provides examples of bandwidth requirements for different video resolutions and frame rates. Overall, the document argues that while current streaming video technology has limitations, it is viable for applications like training, demonstrations and video conferencing within companies to help connect employees and customers.
The document discusses multimedia requirements and techniques for streaming audio and video over the internet. It covers three classes of multimedia applications: streaming, unidirectional real-time, and interactive real-time. Streaming applications can tolerate some delay but not packet loss. Real-time applications have strict delay requirements to avoid jitter. The document discusses protocols like RTP and RTSP that are used for multimedia streaming and techniques for recovering from jitter and packet loss like buffering, FEC, and interleaving.
RAWcooked is an open source software project that aims to optimize film and video storage through lossless compression. It takes raw film scans stored as individual TIFF or DPX frames and encodes them into a single Matroska file using the FFV1 video codec and FLAC audio codec. This results in file sizes that are 1.5-3x smaller than the uncompressed source files. The encoding is fully reversible to allow decoding back to the original source files bit-by-bit. RAWcooked aims to provide an easy workflow for digitization suppliers and film archives to optimize storage while preserving all image and metadata quality.
This document discusses network application performance and ways to improve it. It covers topics like delay, throughput, jitter, quality of service (QoS), and performance measurement tools. Key points include identifying various sources of delay like processing, retransmissions, queueing, and propagation. It also discusses transport protocols TCP and UDP, and ways to optimize TCP performance through techniques like jumbo frames, path MTU discovery, window scaling, and selective acknowledgements. The roles of different network stakeholders in ensuring good performance are also mentioned.
The document discusses digital transmission fundamentals, including:
- Digital signals are represented as sequences of bits that can take on discrete values (0 or 1). More bits are needed to represent information with higher content or complexity.
- Analog signals like voice and video need to be digitized by sampling and quantizing them. This allows the signals to be transmitted over digital networks and regenerated without degradation.
- Communication channels have bandwidth limits that constrain the rate at which information can be transmitted accurately. Channels also introduce impairments like noise, attenuation and distortion.
- Digital transmission offers advantages over analog like long-distance communication without repeated degradation and the ability to detect and correct errors.
This document provides an overview of database system concepts and architecture. It discusses different data models including conceptual, physical and implementation models. It also covers database languages, interfaces, utilities and centralized versus distributed (client-server) architectures. Specifically, it describes hierarchical and network data models, the three schema architecture, data independence, DBMS languages like DDL and DML, and different DBMS classifications including relational, object-oriented and distributed systems.
This document summarizes a workshop on measurement and estimating models for software maintenance. The workshop goals were to: 1) expose the cost estimating community to recent Army software maintenance study findings; 2) gather feedback on the Army's software maintenance work breakdown structure and influence factors; and 3) build consensus on important factors via a Delphi survey. The agenda included presentations on study findings, discussions of current and future activities, and breakout sessions. The background discussed the need for accurate software maintenance cost estimates. The study aimed to characterize software maintenance tasks, collect cost data, and develop estimation models.
Multimedia is a combination of different media types like text, graphics, audio, video and animation that is delivered interactively. The key elements of multimedia are text, graphics, audio, video and animation. Multimedia can be linear with no user interaction or non-linear with user control. Authoring tools are used to develop multimedia content. Multimedia has various applications in business, education, entertainment and more. Common multimedia products include briefing products, reference products, databases, education/training products, kiosks and entertainment/games.
The document describes XML (Extensible Markup Language), including its syntax, elements, and comparison to HTML. It also discusses XML queries using languages like XML-QL, semistructured data and mediators for data integration, and challenges facing XML adoption such as security and data sharing integration.
This document provides an overview of XML (Extensible Markup Language) and related technologies. It discusses the basics of creating an XML document including elements, attributes, and components. It then covers developing constraints for well-formed XML documents using DTDs (Document Type Definitions). Finally, it discusses using the W3C DOM (Document Object Model) API to programmatically access and manage XML documents with technologies like JavaScript.
This document discusses server-side programming using Java servlets. It begins by explaining the difference between static and dynamic web pages/server responses. Java servlets provide a way to generate dynamic responses by instantiating a servlet class in response to an HTTP request. The document then covers the basics of servlets, including the servlet lifecycle methods and using request and response objects to add content and generate the HTTP response. It also discusses retrieving and handling parameter data passed in the HTTP request, as well as using HTTP sessions to maintain state across multiple requests and pages.
This document contains lecture notes on server-side programming using Java servlets. It begins with an overview of servlets, explaining that servlets are Java classes that generate dynamic responses to HTTP requests. It then provides an example "Hello World" servlet that prints "Hello World" when accessed. It discusses key servlet concepts like the servlet lifecycle methods, obtaining parameter data from requests, using HTTP sessions to track multiple requests from the same client, and using cookies to maintain session IDs across requests.
The document discusses key concepts in database management systems including:
1) Database schemas that define the structure and relationships of data in a database.
2) Query processing which involves parsing, optimizing, and executing queries to retrieve and manipulate data.
3) Transaction management and concurrency control which ensure atomicity, consistency, isolation, and durability of transactions across concurrent users.
This document provides an overview of DRAM circuit and architecture basics. It discusses topics such as DRAM cell components, access protocols including row and column access, sense amplifiers, and address decoding. It also covers DRAM speed characteristics such as RCD, CAS latency, and row cycle time. The document traces the evolution of DRAM through technologies like FPM, EDO, SDRAM, and describes how each aimed to improve throughput and latency.
This document describes the design of a 16x8 SRAM. It is divided into four parts: 1) SRAM cell design and analysis by Shu Jiang, 2) Row decoder and wordline driver by Bhavya Daya, 3) Column decoder and column circuitry by Jaffer Sharief, and 4) Precharge circuitry and sense amplifier by Piotr Nowak. The team worked to integrate the components and test the design. Their goal was to create a working SRAM design within time constraints and learn about SRAM design processes and choices.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
2. Outlines
Difference with classic applications
Classes of multimedia applications
Requirements/Constraints
Problems with today’s Internet and solutions
Common multimedia protocols
RTP, RTCP
Accessing multimedia data through a web server
Conclusion
6. Class: Streaming Stored Audio and Video
The multimedia content has been prerecorded and stored on a server
The user may pause, rewind, fast-forward, etc.
The time between the initial request and the start of display can be 1 to 10 seconds
Constraint: after display starts, playout must be continuous
7. Class: Streaming Live Audio and Video
Similar to traditional broadcast TV/radio, but delivered over the Internet
Non-interactive: just view/listen
Cannot pause or rewind
Often combined with multicast
The time between the initial request and the start of display can be up to 10 seconds
Constraint: as with stored streaming, playout must be continuous after display starts
8. Class: Real-Time Interactive Audio and Video
Phone conversations / video conferencing
Constraint: the delay between the initial request and the start of display must be small
Video: < 150 ms acceptable
Audio: < 150 ms not perceived, < 400 ms acceptable
Constraint: after display starts, playout must be continuous
9. Class: Others
Multimedia sharing applications
Download-and-then-play applications, e.g. Napster, Gnutella, Freenet
Distance learning applications: coordinate video, audio and data; typically distributed on CDs
11. Challenge
The TCP/UDP/IP suite provides best-effort service: no guarantees on the expectation or variance of packet delay
Performance deteriorates if links are congested (e.g. transoceanic links)
Most router implementations use only First-Come-First-Served (FCFS) packet processing and transmission scheduling
12. Problems and solutions
Limited bandwidth
Solution: compression
Packet jitter
Solution: fixed/adaptive playout delay for audio (example: phone over IP)
Packet loss
Solution: FEC, interleaving
13. Problem: Limited bandwidth
Intro: Digitization
Audio
x samples taken every second (x = sampling frequency)
The value of each sample is rounded to one of a finite number of values (for example 256). This is called quantization.
Video
Each pixel has a color; each color has a value
14. Problem: Limited bandwidth
Need for compression
Audio
CD quality: 44,100 samples per second with 16 bits per sample, stereo sound
44100 * 16 * 2 = 1.411 Mbps
For a 3-minute song: 1.411 Mbps * 180 s = 254 Mb = 31.75 MB
Video
For 320x240 images with 24-bit color:
320 * 240 * 24 bits = 230 KB/image
15 frames/sec: 15 * 230 KB = 3.456 MB/s
3 minutes of video: 3.456 MB/s * 180 s = 622 MB
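The arithmetic above can be double-checked with a short Python sketch (the helper names are mine, not from the slides):

```python
# Uncompressed media sizes from the slide, recomputed.
# Helper names are illustrative only.

def cd_audio_bitrate_bps(rate_hz=44100, bits_per_sample=16, channels=2):
    """CD-quality PCM bit rate: samples/s * bits/sample * channels."""
    return rate_hz * bits_per_sample * channels

def size_mb(bitrate_bps, seconds):
    """Bits-per-second times duration, converted to megabytes."""
    return bitrate_bps * seconds / 8 / 1e6

audio_bps = cd_audio_bitrate_bps()           # 1,411,200 bps ~ 1.411 Mbps
print(round(size_mb(audio_bps, 180), 2))     # 3-minute song: 31.75 (MB)

frame_bytes = 320 * 240 * 24 // 8            # 24-bit color: 230,400 B ~ 230 KB
video_Bps = frame_bytes * 15                 # 15 fps: ~3.456 MB/s
print(video_Bps * 180 / 1e6)                 # 3 minutes: 622.08 (MB)
```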
15. Audio compression
Several techniques
GSM (13 kbps), G.729 (8 kbps), G.723.1 (6.3 and 5.3 kbps)
MPEG-1 Layer 3 (also known as MP3)
• Typical compression rates: 96 kbps, 128 kbps, 160 kbps
• Very little sound degradation
• If the file is broken up, each piece is still playable
• Complex (psychoacoustic masking, redundancy reduction, and bit reservoir buffering)
• 3-minute song (128 kbps): 2.8 MB
16. Image compression: JPEG
Divide the digitized image into 8x8 pixel blocks
Pixel blocks are transformed into frequency blocks using the DCT (Discrete Cosine Transform), which is similar to the FFT (Fast Fourier Transform)
The quantization phase limits the precision of the frequency coefficients
The encoding phase packs this information in a dense fashion
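The transform-and-quantize steps can be sketched in a few lines of Python. This is a minimal illustration, assuming a uniform 8x8 block and a single illustrative quantization step (real JPEG uses an 8x8 table of steps):

```python
import math

N = 8  # JPEG works on 8x8 pixel blocks

def dct_2d(block):
    # 2-D DCT-II: coefficient F[u][v] weighs the block against cosine patterns
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, step=16):
    # Quantization limits coefficient precision -- this is where JPEG loses data
    return [[round(c / step) for c in row] for row in coeffs]

flat = [[128] * N for _ in range(N)]   # a uniform gray block
q = quantize(dct_2d(flat))
print(q[0][0])                         # only the DC coefficient survives: 64
```

For a uniform block every AC coefficient is zero, which is exactly why the encoding phase can pack such blocks so densely.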
18. Video compression
Popular techniques
MPEG-1 for CD-ROM quality video (1.5 Mbps)
MPEG-2 for high quality DVD video (3-6 Mbps)
MPEG-4 for object-oriented video compression
19. Video Compression: MPEG
MPEG uses inter-frame encoding
Three frame types:
I frame: independent encoding of the frame (JPEG-like)
P frame: encodes the difference relative to a previous I- or P-frame (predicted)
B frame: encodes the difference relative to an interpolated frame (bidirectional)
Note that frames will have different sizes
Complex encoding, e.g. motion of pixel blocks, scene changes, …
Exploits the similarity between consecutive frames
Decoding is easier than encoding
MPEG often uses fixed-rate encoding
(Diagram: example frame sequence B B P B B P B B I B B P B B)
21. MPEG System Streams
Combine MPEG video and audio streams into a single synchronized stream
Consists of a hierarchy, with metadata at every level describing the data:
The system level contains synchronization information
The video level is organized as a stream of groups of pictures
A group of pictures consists of pictures
Pictures are organized in slices
…
25. Dealing with packet jitter
How do Phone-over-IP applications limit the effect of jitter?
A sequence number is added to each packet
A timestamp is added to each packet
Playout is delayed
27. Dealing with packet jitter
Adaptive playout delay
The objective is to use a value for p − r that tracks the network delay as it varies during a transfer. The following formulas are used:
di = (1 − u)di−1 + u(ri − ti), with u = 0.01 for example
vi = (1 − u)vi−1 + u|ri − ti − di|
where
ti is the timestamp of the ith packet (the time packet i is sent)
ri is the time packet i is received
pi is the time packet i is played
di is an estimate of the average network delay
vi is an estimate of the average deviation of the delay from di
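The two estimators are exponentially weighted moving averages and fit in a few lines. A minimal sketch (the sample timestamps and the playout rule in the comment are my own illustration, not from the slides):

```python
# EWMA estimators for adaptive playout delay.
# t_i: send timestamp of packet i; r_i: its receive time; U: smoothing gain.
U = 0.01

def update(d_prev, v_prev, t_i, r_i):
    delay = r_i - t_i
    d = (1 - U) * d_prev + U * delay             # avg network delay estimate
    v = (1 - U) * v_prev + U * abs(delay - d)    # avg deviation estimate
    return d, v

# A common (assumed, not slide-stated) playout rule for the first packet of
# a talkspurt: p_i = t_i + d_i + K * v_i, with a safety factor K such as 4.
d, v = 100.0, 5.0                                # illustrative initial values
for t_i, r_i in [(0, 103), (20, 125), (40, 160)]:
    d, v = update(d, v, t_i, r_i)
print(round(d, 2), round(v, 2))
```

Because U is small, one late packet barely moves d, but a sustained delay increase gradually pulls both estimates up, stretching the playout delay with the network.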
28. Problem: Packet loss
Loss in a broader sense: the packet never arrives, or arrives later than its scheduled playout time
Since retransmission is inappropriate for real-time applications, FEC or interleaving is used to reduce the impact of loss
29. Recovering from packet loss
Forward Error Correction (FEC)
Send one redundant encoded chunk for every n chunks (the XOR of the n original chunks)
If 1 packet in the group is lost, it can be reconstructed
If more than 1 packet is lost, recovery is impossible
Disadvantages:
The smaller the group size, the larger the overhead
Playout delay is increased
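The XOR scheme above is short enough to demonstrate directly. A self-contained sketch (chunk contents are illustrative):

```python
# XOR-parity FEC: one redundant chunk per group of n data chunks lets the
# receiver rebuild any single lost chunk in that group.

def parity(chunks):
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b          # byte-wise XOR across all chunks
    return bytes(out)

group = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # n = 4 data chunks
p = parity(group)                              # sent as the (n+1)th packet

# Suppose chunk index 2 is lost: XORing the survivors with the parity
# cancels everything except the missing chunk.
survivors = [group[0], group[1], group[3], p]
recovered = parity(survivors)
print(recovered)  # b'CCCC'
```

The trade-offs in the slide fall out of the math: the parity packet is pure overhead (1/n of the group), and the receiver must wait for the whole group before it can repair, which adds playout delay.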
30. Recovering from packet loss
Piggybacking a lower-quality stream
With one redundant low-quality chunk piggybacked on each chunk, the scheme can recover from single packet losses
31. Recovering from packet loss
Interleaving
Divide 20 msec of audio data into smaller units of 5 msec each and interleave them across packets
Upon loss, the receiver has a set of partially filled chunks
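The interleaving idea can be sketched with labels standing in for 5 ms audio units (the labels and interleave depth are illustrative):

```python
# Interleave 5 ms units of four 20 ms chunks across packets, so one lost
# packet costs 5 ms from each chunk instead of a contiguous 20 ms gap.

def interleave(units, depth=4):
    # units[c][k] is unit k of chunk c; packet k carries unit k of every chunk
    return [[units[c][k] for c in range(depth)] for k in range(depth)]

chunks = [["a1", "a2", "a3", "a4"],     # chunk a: four 5 ms units
          ["b1", "b2", "b3", "b4"],
          ["c1", "c2", "c3", "c4"],
          ["d1", "d2", "d3", "d4"]]
packets = interleave(chunks)
print(packets[0])  # ['a1', 'b1', 'c1', 'd1']

# Losing packets[1] drops only a2, b2, c2, d2: 5 ms from each chunk,
# which the receiver can conceal far more easily than a 20 ms burst.
```

Unlike FEC this adds no bandwidth overhead, but the receiver must buffer a full interleaving group before playout, so it again trades delay for robustness.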
32. Recovering from packet loss
Receiver-based repair
The simplest form: packet repetition, which replaces lost packets with copies of the packets that arrived immediately before the loss
A more computationally intensive form: interpolation, which uses the audio before and after the loss to interpolate a suitable packet to cover the loss
35. Real-time Transport Protocol (RTP)
RTP logically extends UDP
Sits between UDP and the application
Implemented as an application library
What does it do?
Framing
Multiplexing
Synchronization
Feedback (RTCP)
36. RTP packet format
Payload Type: 7 bits, allowing 128 different encoding types, e.g. PCM, MPEG-2 video, etc.
Sequence Number: 16 bits; used to detect packet loss
37. RTP packet format (cont)
Timestamp: 32 bits; gives the
sampling instant of the first audio/video
byte in the packet; used to remove jitter
introduced by the network
Synchronization Source identifier
(SSRC): 32 bits; an id for the source of a
stream; assigned randomly by the source
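The fields above occupy the fixed first 12 bytes of every RTP packet; packing them can be sketched with the standard library (layout per RFC 3550, with no CSRC list or extensions):

```python
import struct

def rtp_header(payload_type, seq, timestamp, ssrc, marker=0):
    """Pack the fixed 12-byte RTP header in network byte order."""
    byte0 = 2 << 6                                 # version=2, P=0, X=0, CC=0
    byte1 = (marker << 7) | (payload_type & 0x7F)  # 7-bit payload type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
```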
38. Timestamp vs. Sequence Number
Timestamps relate packets to real
time
Timestamp values are sampled from a
media-specific clock
Sequence numbers relate packets to
other packets
39. Audio silence example
Consider an audio data type
What do you want to send during silence?
• Nothing at all
Why might this cause problems?
• The other side needs to distinguish between loss and
silence
The receiver uses timestamps and sequence numbers
to figure out what happened
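That receiver-side check can be sketched as follows (samples_per_packet is the timestamp increment per packet; 160, i.e. 20 msec of 8 kHz audio, is an assumed figure):

```python
def classify_gap(prev_seq, prev_ts, seq, ts, samples_per_packet=160):
    """Distinguish loss from silence: a sequence-number gap means
    packets were lost; consecutive sequence numbers with a timestamp
    jump mean the sender suppressed silence."""
    if seq != prev_seq + 1:
        return "loss"
    if ts > prev_ts + samples_per_packet:
        return "silence"
    return "normal"
```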
40. RTP Control Protocol (RTCP)
Used in conjunction with RTP to exchange
control information between the sender and the
receiver
Three report types are defined: receiver report,
sender report, and source description
Reports contain statistics such as the number of
packets sent, the number of packets lost, and the
inter-arrival jitter
Typically, RTCP bandwidth is limited to 5% of the
session bandwidth, with approximately one sender
report for every three receiver reports
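The 1:3 ratio follows from the standard RTCP bandwidth split (25% of the RTCP budget to senders, 75% to receivers, per RFC 3550; the split itself is not stated on the slide):

```python
def rtcp_budget(session_bw_bps, rtcp_fraction=0.05):
    """Split the RTCP bandwidth budget (5% of the session bandwidth)
    between sender reports (25%) and receiver reports (75%)."""
    rtcp_bw = session_bw_bps * rtcp_fraction
    return {"senders": rtcp_bw * 0.25, "receivers": rtcp_bw * 0.75}
```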
41. Outlines
Difference with classic applications
Classes of multimedia applications
Requirements/Constraints
Problems with today’s Internet and
solutions
Common multimedia protocols
RTP, RTCP
Accessing multimedia data through
a web server
Conclusion
42. Streaming Stored
Multimedia Example
The audio/video file is segmented and sent
over either TCP or UDP; a public
segmentation protocol: the Real-time
Transport Protocol (RTP)
Interactive user control is provided by,
e.g., the public Real Time
Streaming Protocol (RTSP)
43. Streaming Stored
Multimedia Example
Helper application: displays content,
which is typically requested via a Web
browser; e.g. RealPlayer; typical
functions:
Decompression
Jitter removal
Error correction: uses redundant packets
to reconstruct the original stream
GUI for user control
44. Streaming from Web
Servers
Audio: sent in files as HTTP objects
Video (interleaved audio and images in one file,
or two separate files with the client synchronizing
the display) sent as HTTP object(s)
A simple architecture is to have the browser
request the object(s) and, after their reception,
pass them to the player for display
- No pipelining
45. Streaming from a Web
Server (cont)
Alternative: set up a connection between the
server and the player, then download
The Web browser requests and receives a
meta file (a file describing the object) instead
of receiving the media file itself
The browser launches the appropriate player
and passes it the meta file
The player sets up a TCP connection with the
streaming server and downloads the file
47. Options when using a
streaming server
Use UDP: the server sends at a rate (compression and
transmission) appropriate for the client; to reduce jitter, the
player buffers initially for 2-5 seconds, then starts display
Use TCP: the sender sends at the maximum possible rate
under TCP, retransmitting when an error is encountered; the
player uses a much larger buffer to smooth the delivery rate of TCP
48. Real Time Streaming
Protocol (RTSP)
Lets the user control the display: rewind, fast forward,
pause, resume, etc.
Out-of-band protocol: uses two connections, one
for control messages (port 554) and one for the
media stream
RFC 2326 permits the use of either TCP or UDP for
the control-message connection, sometimes
called the RTSP channel
As before, the meta file is communicated to the web
browser, which then launches the player; the player
sets up an RTSP connection for control messages
in addition to the connection for the streaming
media
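A control message on the RTSP channel is plain text; building one can be sketched as follows (the URL and session id are hypothetical examples, not from the slides):

```python
def rtsp_request(method, url, cseq, session=None):
    """Build a minimal RTSP/1.0 request (RFC 2326); methods such as
    PLAY and PAUSE implement the user controls listed above."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    if session is not None:
        lines.append(f"Session: {session}")
    return "\r\n".join(lines) + "\r\n\r\n"
```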
51. Outlines
Difference with classic applications
Classes of multimedia applications
Requirements/Constraints
Problems with today’s Internet and
solutions
Common multimedia protocols
RTP, RTCP
Accessing multimedia data through a web
server
Conclusion
52. Conclusion
None of the proposed solutions gives the user a real
guarantee that multimedia data will arrive on time
Couldn't we reserve some bandwidth for our
multimedia transfer?