This document discusses various methods of data compression. It begins by defining compression as reducing the size of data while retaining its meaning. There are two main types of compression: lossless and lossy. Lossless compression allows for perfect reconstruction of the original data by removing redundant data. Common lossless methods include run-length encoding and Huffman coding. Lossy compression is used for images and video, and results in some loss of information. Popular lossy schemes are JPEG, MPEG, and MP3. The document then proceeds to describe several specific compression algorithms and their encoding and decoding processes.
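As a quick illustration of the lossless methods mentioned above, a minimal run-length encoder/decoder in Python might look like the sketch below. This is not taken from the document itself; it simply shows the core idea of collapsing repeated symbols into (symbol, count) pairs.

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of identical characters into (char, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((ch, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Expand (char, count) pairs back to the original string."""
    return "".join(ch * n for ch, n in runs)
```

For example, `rle_encode("AAAABBBCC")` yields `[("A", 4), ("B", 3), ("C", 2)]`, and decoding reconstructs the input exactly, which is what makes the method lossless.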
The document discusses video compression techniques. It describes video compression as removing repetitive images, sounds, and scenes to reduce file size. There are two types: lossy compression which removes unnecessary data, and lossless compression which compresses without data loss. Common techniques involve predicting frames, exploiting temporal and spatial redundancies, and standards like MPEG. Applications include cable TV, video conferencing, storage media. Advantages are reduced file sizes and faster transfer, while disadvantages are recompilation needs and potential transmission errors.
In computer science and information theory, data compression, source coding,[1] or bit-rate reduction involves encoding information using fewer bits than the original representation.[2] Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression.
The document discusses audio compression techniques. It begins with an introduction to pulse code modulation (PCM) and then describes μ-law and A-law compression standards which compress audio using companding algorithms. It also covers differential PCM and adaptive differential PCM (ADPCM) techniques. The document then discusses the MPEG audio compression standard, including its encoder architecture, three layer standards (Layers I, II, III), and applications. It concludes with a comparison of various MPEG audio compression standards and references.
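To make the companding idea concrete, here is a floating-point sketch of μ-law compression and expansion (μ = 255, the value used in the North American/Japanese G.711 standard). This is an illustration, not the document's own material: a real codec quantizes the companded value to 8 bits, which this sketch omits.

```python
import math

MU = 255  # standard mu-law parameter

def mu_law_compress(x: float) -> float:
    """Compand a sample x in [-1, 1]: small amplitudes are boosted,
    so uniform quantization afterwards wastes fewer levels on loud samples."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Invert the companding curve, recovering the original amplitude."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)
```

Note how a quiet sample such as 0.01 maps to roughly 0.23 after compression: the curve spends most of its output range on low amplitudes, matching how human hearing perceives loudness.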
The document provides an overview of Huffman coding, a lossless data compression algorithm. It begins with a simple example to illustrate the basic idea of assigning shorter codes to more frequent symbols. It then defines key terms like entropy and describes the Huffman coding algorithm, which constructs an optimal prefix code from the frequency of symbols in the data. The document discusses how Huffman coding can be applied to image compression by first predicting pixel values and then encoding the residuals. It notes some disadvantages of Huffman coding and describes variations like adaptive Huffman coding.
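The greedy merge at the heart of the algorithm can be sketched compactly. The following is a simplified illustration (assuming single-character symbols and at least two distinct symbols), not the document's own code: it repeatedly merges the two least-frequent nodes and prepends a bit to every symbol under each merged node.

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a prefix code by repeatedly merging the two least-frequent nodes.
    Node labels are strings of the symbols they contain."""
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    codes = {sym: "" for _, _, sym in heap}
    tiebreak = len(heap)  # keeps heap entries comparable when freqs tie
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        for sym in left:             # symbols under the merged node
            codes[sym] = "0" + codes[sym]
        for sym in right:
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (f1 + f2, tiebreak, left + right))
        tiebreak += 1
    return codes
```

On `"aaaabbc"` the frequent symbol `a` receives a 1-bit code while `b` and `c` receive 2-bit codes, giving the shorter-codes-for-frequent-symbols behavior the example in the document illustrates.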
Comparison between JPEG (DCT) and JPEG 2000 (DWT) compression standards, by Rishab2612
This topic comes under Image Processing. In it, a comparison between the JPEG and JPEG 2000 compression standard techniques is made. The PPT comprises results, analysis, and conclusion along with the relevant outputs.
This is the subject slide deck for the module MMS2401 - Multimedia System and Communication, taught at Shepherd College of Media Technology, affiliated with Purbanchal University.
What is Video Compression? Introduction to Video Compression, Motivation, Working Methodology of Video Compression, Example, Applications, Needs of Video Compression, Advantages & Disadvantages
The document discusses video compression basics and MPEG-2 video compression. It explains that video frames contain redundant spatial and temporal data that can be compressed. MPEG-2 uses three frame types (I, P, B frames) and compresses frames using intra-frame and inter-frame encoding techniques like DCT, quantization, and entropy encoding to remove redundancy. The encoding process transforms raw video frames to compressed bitstreams for efficient storage and transmission.
Types of data compression: lossy compression, lossless compression, and many more; how data is compressed, etc. Slightly more extensive than the CIE O Level syllabus.
This document provides an overview of a research project on image compression. It discusses image compression techniques including lossy and lossless compression. It describes using discrete wavelet transform, lifting wavelet transform, and stationary wavelet transform for image transformation. Experiments were conducted to compare the compression ratio and processing time of different combinations of wavelet transforms, vector quantization, and Huffman/Arithmetic coding. The results were analyzed to evaluate the compression performance and efficiency of the different methods.
This slide gives you the basic understanding of digital image compression.
Please note: this is a class teaching PPT; more detailed topics were covered in the classroom.
This white paper discusses various video compression techniques and standards. It explains that JPEG is used for still images while MPEG is used for video. The two main early standards were JPEG and MPEG-1. Later standards like MPEG-2, MPEG-4, and H.264 provided improved compression ratios and capabilities. Key techniques discussed include lossy compression, comparing adjacent frames to reduce redundant data, and balancing compression ratio with image quality and latency considerations for different applications like surveillance video.
This document provides an overview of audio compression technologies. It discusses what audio is, why compression is needed, and the main types of audio compression: lossy and lossless. It describes some standard codecs for each type including MP3, AAC, FLAC. It explains the MPEG audio encoding and decoding process, and notes that AAC is the successor to MP3. In summary, the document covers audio fundamentals and provides details on common audio compression standards and techniques.
This Computer Science (A Level) resource discusses data compression techniques. Compression reduces the number of bits required to represent data to save disk space and increase transfer speeds. There are two main types of compression: lossy compression, which permanently removes non-essential data and can reduce quality, and lossless compression, which identifies patterns to compress data without any loss. Common lossy techniques are JPEG, MPEG, and MP3, while common lossless techniques are run-length encoding and dictionary encoding.
Codec stands for enCOder/DECoder or COmpressor/DECompressor. It is a software or hardware that compresses and decompresses audio and video data streams.
This document discusses various data compression techniques. It begins by explaining why data compression is useful for optimizing storage space and transmission times. It then covers the concepts of entropy and lossless versus lossy compression methods. Specific lossless methods discussed include run-length encoding, Huffman coding, and Lempel-Ziv encoding. Lossy methods covered are JPEG for images, MPEG for video, and MP3 for audio. Key steps of each technique are outlined at a high level.
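Since entropy is the concept that ties these methods together, a one-function sketch may help: Shannon entropy gives the lower bound on average bits per symbol that any lossless code can reach for a memoryless source. This example is an illustration, not taken from the document.

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(data: str) -> float:
    """Shannon entropy H = -sum(p * log2 p): the lossless lower bound
    on average bits per symbol for a memoryless source."""
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in Counter(data).values())
```

A string of four distinct equally likely symbols has entropy 2.0 bits/symbol, while a string of one repeated symbol has entropy 0: it carries no information, so it compresses to almost nothing.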
This document summarizes MPEG 1 and 2 video compression standards. It explains the need for video compression due to the large data rates of uncompressed video like HDTV. MPEG compression works by predicting frames from previous frames using motion compensation and coding the residual errors. It uses I, P, and B frames along with other techniques like chroma subsampling to achieve high compression ratios like 83:1 while maintaining quality. MPEG-2 improved upon MPEG-1 by supporting higher resolutions and bitrates needed for digital television.
This document discusses data compression algorithms including lossless and lossy methods. It defines lossless compression as allowing perfect reconstruction of the original data and lossy compression as permitting only approximate reconstruction. Specific lossless methods covered are run-length encoding, Huffman coding, and Lempel-Ziv encoding. Lossy methods discussed are JPEG compression for images, discrete cosine transform, and MPEG video compression. The document concludes that the presented approach of using the Hartley transform for image compression with separate magnitude and phase processing achieved good performance.
This document discusses information theory and data compression. It covers three main topics:
1) Types of data compression including lossless compression, which retains all original data, and lossy compression, which permanently eliminates some information.
2) Compression methods like removing spaces, using single characters to represent repeated characters, and substituting smaller bit sequences for recurring characters.
3) Continuous amplitude signals, which are varying quantities over a continuum like time, and examples of finite vs infinite duration signals.
This document discusses multimedia compression. It begins by explaining why compression is needed due to the large size of raw audio and video data. It then outlines an overview of generic compression algorithms and content-specific compression techniques. It discusses lossy compression and introduces common lossless compression algorithms like Huffman coding and Arithmetic coding. Finally, it explains how content-specific compression aims to further reduce redundancy by de-correlating audio, images, and video based on properties like temporal, channel, color space, and spatial correlations.
Compression: Video Compression (MPEG and others), by danishrafiq
This document provides an overview of video compression techniques used in standards like MPEG and H.261. It discusses how uncompressed video data requires huge storage and bandwidth that compression aims to address. It explains that lossy compression methods are needed to achieve sufficient compression ratios. The key techniques discussed are intra-frame coding using DCT and quantization similar to JPEG, and inter-frame coding using motion estimation and compensation to remove temporal redundancy between frames. Motion vectors are found using techniques like block matching and sum of absolute differences. MPEG and other standards use a combination of these intra and inter-frame coding techniques to efficiently compress video for storage and transmission.
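The block-matching idea described above can be sketched in a few lines. This is a simplified exhaustive search over plain nested lists (real encoders restrict the search window and use fast search patterns); it is an illustration, not the document's own code.

```python
def sad(block_a: list[list[int]], block_b: list[list[int]]) -> int:
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(block: list[list[int]],
                       ref_frame: list[list[int]],
                       size: int) -> tuple[int, int]:
    """Exhaustively search the reference frame for the lowest-SAD match
    and return its top-left (row, col) position."""
    h, w = len(ref_frame), len(ref_frame[0])
    best = None
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            candidate = [row[x:x + size] for row in ref_frame[y:y + size]]
            cost = sad(block, candidate)
            if best is None or cost < best[0]:
                best = (cost, (y, x))
    return best[1]
```

The encoder then transmits only the motion vector plus the (hopefully small) residual between the block and its best match, rather than the block itself.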
This document provides an overview of data compression techniques. It discusses how data compression reduces the number of bits needed to represent data, saving storage space and transmission bandwidth. It describes lossy compression methods like JPEG and MPEG that eliminate redundant information, resulting in smaller file sizes but some loss of data quality. Lossless compression methods like ZIP and GIF are also covered, which compress data without any loss for file types like text where quality is important. Specific lossless compression techniques like run length encoding, Huffman coding, Lempel-Ziv coding are explained. The document concludes with a brief mention of image, video, audio and dictionary based compression methods.
This slide Pack covers the 6 internal components of a computer system. These components play a very important part in the fast running of a computer system.
This document discusses rendering algorithms and techniques. It begins by defining rendering as the process of generating 2D or 3D images from 3D models. There are two main categories of rendering: real-time rendering used for interactive graphics, and pre-rendering used where image quality is prioritized over speed. The three main computational techniques are ray casting, ray tracing, and shading. Ray tracing simulates physically accurate lighting by tracing the path of light rays. Shading determines an object's shade based on attributes like diffuse illumination and light source contributions.
This document discusses data compression techniques. It begins by defining data compression as encoding information in a file to take up less space. It then covers the need for compression to save storage and transmission time. The main types of compression discussed are lossless, which allows exact reconstruction of data, and lossy, which allows approximate reconstruction for better compression. Specific lossless techniques covered include Huffman coding, which assigns variable length codes based on frequency. Lossy techniques like JPEG are also discussed. The document concludes by listing applications of compression techniques in files, multimedia, and communication.
This document provides an overview of different data compression techniques. It begins by introducing the concepts of lossless compression methods like run-length encoding and Huffman coding. It then discusses the Lempel-Ziv encoding algorithm. The document concludes by mentioning lossy compression standards for images, video, and audio, such as JPEG, MPEG, and MP3. Diagrams and examples are provided to illustrate how various compression algorithms work.
Data compression techniques aim to optimize storage and transmission of data by removing redundant information. Lossless compression methods like Run-Length Encoding and Huffman Coding replace repetitive patterns with codes to reduce file size without losing any original data. Lossy methods like JPEG, MPEG, and MP3 are commonly used for images, video, and audio as they allow for smaller file sizes by discarding imperceptible data. These methods break files into blocks, apply transforms to reveal redundancies, then quantize and encode the data.
This document summarizes several source coding techniques: Arithmetic coding encodes a message into a single floating point number between 0 and 1. Lempel-Ziv coding builds a dictionary to encode repeated patterns. Run length encoding replaces repeated characters with a code indicating the character and number of repeats. Rate distortion theory calculates the minimum bit rate needed for a given source and distortion. The entropy rate measures how entropy grows with the length of a stochastic process. JPEG uses lossy compression including discrete cosine transform and quantization to discard high frequency data imperceptible to humans.
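The interval-narrowing step of arithmetic coding can be sketched with floating-point arithmetic. This is an illustration for short messages only (production coders use integer renormalization to sidestep float precision limits) and is not taken from the document.

```python
def _cumulative(probs: dict[str, float]) -> dict[str, float]:
    """Start of each symbol's slice of [0, 1)."""
    cum, start = 0.0, {}
    for sym, p in probs.items():
        start[sym] = cum
        cum += p
    return start

def arithmetic_encode(message: str, probs: dict[str, float]) -> float:
    """Narrow [low, high) by each symbol's probability slice; any number
    in the final interval identifies the whole message."""
    start = _cumulative(probs)
    low, high = 0.0, 1.0
    for sym in message:
        width = high - low
        high = low + width * (start[sym] + probs[sym])
        low = low + width * start[sym]
    return (low + high) / 2  # one representative of the interval

def arithmetic_decode(code: float, length: int, probs: dict[str, float]) -> str:
    """Replay the narrowing: at each step, see whose slice contains the code."""
    start = _cumulative(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        width = high - low
        value = (code - low) / width
        for sym, p in probs.items():
            if start[sym] <= value < start[sym] + p:
                out.append(sym)
                high = low + width * (start[sym] + p)
                low = low + width * start[sym]
                break
    return "".join(out)
```

Unlike Huffman coding, which spends a whole number of bits per symbol, the single fraction produced here can approach the entropy bound even when symbol probabilities are far from powers of one half.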
This document discusses various image and graphic formats. It begins by explaining colour perception and the primary colours used in additive and subtractive colour models. It then summarizes several common file formats like JPEG, GIF, PNG, and BMP, noting what types of images each is best suited for in terms of things like smoothness, transparency, and lossy/lossless compression. Vector formats like SVG and Flash are also briefly covered. The document concludes by discussing topics like image size, resolution, compression techniques, and ethical considerations around image use.
This document discusses various techniques for lossless data compression, including run-length coding, Huffman coding, adaptive Huffman coding, arithmetic coding, and Shannon-Fano coding. It provides details on how each technique works, such as assigning shorter codes to more frequent symbols in Huffman coding and dynamically updating codes based on the data stream in adaptive Huffman coding. The document also discusses the importance of compression techniques for reducing the number of bits needed to store or transmit data.
Data compression reduces the size of data files by removing redundant information while preserving the essential content. It aims to reduce storage space and transmission times. There are two main types of compression: lossless, which preserves all original data, and lossy, which sacrifices some quality for higher compression ratios. Common lossless methods are run-length encoding, Huffman coding, and Lempel-Ziv encoding, while lossy methods include JPEG, MPEG, and MP3.
LZW coding is a lossless compression technique that removes spatial redundancies in images. It works by assigning variable-length code words to sequences of input symbols using a dictionary. As the dictionary grows, longer matches are encoded, improving compression ratios. LZW compression is fast, simple to implement, and effective for images with repeating patterns, making it widely used in formats like GIF and TIFF.
A comparison of various data compression techniques that clearly differentiates between them. It is precise and focused on the techniques rather than on the topic itself.
1. The document discusses different compression techniques for text, audio, images, and video.
2. It provides examples of compression ratios achieved using lossy and lossless compression methods. For example, text compression can achieve 3:1 ratios using Lempel-Ziv coding while audio compression can achieve ratios between 3:1 to 24:1 using MP3.
3. The techniques discussed include entropy encoding, run-length encoding, Huffman coding, discrete cosine transforms, and differential encoding which takes advantage of redundancies in the data. The best approach depends on the type of data and acceptable quality.
JPEG compression involves four key steps:
1) Applying the discrete cosine transform (DCT) to 8x8 pixel blocks, transforming spatial information to frequency information.
2) Quantizing the transformed coefficients, discarding less important high-frequency information to reduce file size.
3) Scanning coefficients in zigzag order to group similar frequencies together, further compressing the data.
4) Entropy encoding the output, typically using Huffman coding, to remove statistical redundancy and achieve further compression.
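Step 3, the zigzag scan, is easy to make concrete. The sketch below (an illustration, not from the document) generates the visiting order for an n×n block by walking its anti-diagonals, so that the low-frequency coefficients in the top-left corner come first and the high-frequency coefficients, which quantization mostly zeroes out, trail at the end where they compress well.

```python
def zigzag_order(n: int = 8) -> list[tuple[int, int]]:
    """JPEG-style zigzag scan order for an n x n coefficient block."""
    order: list[tuple[int, int]] = []
    for d in range(2 * n - 1):  # d indexes the anti-diagonals
        diagonal = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        # Even diagonals run bottom-left to top-right, odd ones the reverse,
        # producing the familiar back-and-forth zigzag path.
        order.extend(reversed(diagonal) if d % 2 == 0 else diagonal)
    return order
```

The first few positions for an 8×8 block are (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), matching the standard JPEG scan; after quantization this ordering tends to leave long runs of trailing zeros for the entropy coder in step 4.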
This document discusses data representation, data compression, and encryption. It begins by defining data representation and describing how computers store different types of data like numbers, text, graphics, and sound. It then discusses several data representation methods and formats like ASCII, EBCDIC, and ASN.1. The document also covers data compression techniques including lossless and lossy compression. Common audio, video, and image compression formats and their properties are described. Finally, the document provides an overview of encryption, describing the encryption and decryption process and some basic encryption concepts and techniques.
This document provides an overview of image compression techniques. It defines key concepts like pixels, image resolution, and types of images. It then explains the need for compression to reduce file sizes and transmission times. The main compression methods discussed are lossless techniques like run-length encoding and Huffman coding, as well as lossy methods for images (JPEG) and video (MPEG) that remove redundant data. Applications of image compression include transmitting images over the internet faster and storing more photos on devices.
This document provides an overview of data compression techniques. It discusses lossless compression algorithms like Huffman encoding and LZW encoding which allow for exact reconstruction of the original data. It also discusses lossy compression techniques like JPEG and MPEG which allow for approximate reconstruction for images and video in order to achieve higher compression rates. JPEG divides images into 8x8 blocks and applies discrete cosine transform, quantization, and run length encoding. MPEG spatially compresses each video frame using JPEG and temporally compresses frames by removing redundant frames.
The document discusses various data compression techniques. It defines data compression as the process of reducing the size of data through the use of compression algorithms. There are two main types of compression: lossless, where the original and decompressed data are identical, and lossy, where some data may be lost during compression. Common lossless techniques mentioned include run-length encoding, Huffman coding, and Lempel-Ziv encoding, while lossy methods discussed are JPEG for images and MPEG for video. The document also provides examples to illustrate how several of these compression algorithms work.
This document discusses data compression techniques. It begins with an introduction to data compression, explaining that it reduces file sizes by identifying repetitive patterns in data. It then discusses some common questions around data compression, its major steps, types including lossless and lossy compression, and some examples like Huffman coding and LZ-77 encoding. The document provides details on these techniques through examples and diagrams.
The document discusses JPEG image compression, which involves lossy compression. It describes the major steps in JPEG coding as: transforming RGB to YIQ/YUV color space and subsampling color; applying discrete cosine transformation (DCT); quantization; zig-zag ordering; DPCM on DC component; run-length encoding; and entropy coding like Huffman coding. Quantization is the main source of loss in JPEG compression, where each DCT value is divided by a quantization value and rounded down.
There are two categories of data compression methods: lossless and lossy. Lossless methods preserve the integrity of the data by using compression and decompression algorithms that are exact inverses, while lossy methods allow for data loss. Common lossless methods include run-length encoding and Huffman coding, while lossy methods like JPEG, MPEG, and MP3 are used to compress images, video, and audio by removing imperceptible or redundant data.
The document discusses various techniques for image compression, including lossless and lossy methods. For lossless compression, it describes predictive coding techniques that remove inter-pixel redundancy such as delta modulation. It also covers entropy encoding schemes like Huffman coding and LZW coding. For lossy compression, it discusses the discrete cosine transform used in the JPEG standard, where higher frequency coefficients are quantized more coarsely to remove information. Zig-zag ordering is used before entropy coding the quantized DCT coefficients.
The document is a term paper on image compression submitted by a student. It discusses various topics related to image compression including:
- Introduction to image compression and its goals of reducing redundancy and storing/transmitting data efficiently.
- Methods to reduce correlation between pixels like predictive coding, orthogonal transforms, and subband coding.
- Quantization in image compression and its role in reducing less important high frequency components.
- Entropy coding techniques used in JPEG like differential coding, run-length coding, and Huffman coding to further compress the data.
4. Video: 30 pictures per second
Each picture = 200,000 dots or pixels
8 bits to represent each primary color
For RGB = 2^8 × 2^8 × 2^8 colors, i.e. 24 bits per pixel
Bits required for one second of video = 30 × 200,000 × 24 = 144,000,000 bits
A two-hour movie requires 2 × 60 × 60 × 144,000,000 = 1,036,800,000,000 bits
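The arithmetic above can be checked directly (a quick sanity calculation under the slide's stated assumptions; the constant names are mine):

```python
# Raw bit budget for uncompressed video, under the slide's assumptions:
# 200,000 pixels per picture, 8 bits per RGB primary, 30 pictures per second.
PIXELS_PER_PICTURE = 200_000
BITS_PER_PIXEL = 3 * 8            # 8 bits each for R, G and B
PICTURES_PER_SECOND = 30

bits_per_second = PIXELS_PER_PICTURE * BITS_PER_PIXEL * PICTURES_PER_SECOND
two_hour_movie_bits = 2 * 60 * 60 * bits_per_second

print(bits_per_second)            # 144000000
print(two_hour_movie_bits)        # 1036800000000 -- roughly 130 GB, hence compression
```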
6. Compression is a way to reduce the number of bits in a
frame while retaining its meaning.
Decreases space, transmission time, and cost.
The technique is to identify redundancy and eliminate it.
If a file contains only capital letters, we may encode all
26 letters using 5-bit numbers instead of 8-bit
ASCII codes.
If the file has n characters, then the savings = (8n − 5n)/8n
= 37.5%
Introduction
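The savings figure above can be verified in a few lines of Python (a minimal sketch; the variable names are mine):

```python
import math

# 26 capital letters fit in ceil(log2(26)) = 5 bits instead of 8-bit ASCII.
bits_needed = math.ceil(math.log2(26))

# For a file of n characters the saving is (8n - 5n) / 8n; the n cancels,
# so the fraction saved is the same for any file length.
n = 1000                                   # any file length works
saving = (8 * n - bits_needed * n) / (8 * n)
print(bits_needed, saving)                 # 5 0.375
```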
8. Lossless Compression
In lossless data compression:
o The integrity of the data is preserved.
o The original data and the data after compression and
decompression are exactly the same.
o No data is lost.
o Redundant data is removed during compression and added
back during decompression.
o Lossless compression methods are normally used
when we cannot afford to lose any data.
17. Now if we read these aloud it's not so weird:
“Three apples, two pears, one banana, two oranges
and one apple”
.........and it saves SPACE
18. Now to translate into computer terms...
A scan line contains a run of numbers:
55556987444425555611111988888222222222
...using run-length encoding:
(4,5) (1,6) (1,9) (1,8) (1,7)
(4,4) (1,2) (4,5) (1,6) (5,1)
(1,9) (5,8) (9,2)
19. To sum it up, in Wikipedia's terms:
Run-length encoding (RLE) is a very simple
form of data compression in which runs of data
(that is, sequences in which the same data
value occurs in many consecutive data
elements) are stored as a single data value
and count, rather than as the original run.
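The scan-line example above can be reproduced with a minimal encoder/decoder pair (a sketch; the function names are mine, not from the deck):

```python
# A minimal run-length encoder/decoder producing the (count, value)
# pairs used on the scan-line slide.
from itertools import groupby

def rle_encode(data):
    # collapse each run of identical symbols into a (count, value) pair
    return [(len(list(group)), value) for value, group in groupby(data)]

def rle_decode(pairs):
    return "".join(value * count for count, value in pairs)

scan_line = "55556987444425555611111988888222222222"
pairs = rle_encode(scan_line)
print(pairs[:5])   # [(4, '5'), (1, '6'), (1, '9'), (1, '8'), (1, '7')]
assert rle_decode(pairs) == scan_line
```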
20. Huffman Coding
Huffman coding is credited to David Albert Huffman.
Huffman coding is an entropy encoding algorithm used
for lossless data compression.
It is a method of storing strings of data as
binary code in an efficient manner.
Huffman coding uses variable-length coding, which
means that symbols in the data being encoded are
converted into binary codewords based on how often each
symbol is used.
There is a way to decide which binary code to give each
character, using trees.
21. The (Real) Basic Algorithm
1. Scan the text to be compressed and tally the occurrences of all
characters.
2. Sort or prioritize characters based on the number of
occurrences in the text.
3. Build the Huffman code tree based on the prioritized list.
4. Perform a traversal of the tree to determine all code words.
5. Scan the text again and create the new file using the Huffman
codes.
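The five steps above can be sketched compactly in Python (a hedged sketch using the standard-library heap; the exact codewords depend on tie-breaking, but every Huffman tree for a given text has the same total encoded length):

```python
# Huffman coding sketch: tally, prioritize, build tree, traverse, encode.
import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)                      # step 1: tally occurrences
    # step 2: a min-heap prioritizes the least frequent symbols; the
    # counter i breaks ties so tuples never compare trees directly
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:                      # step 3: build the tree
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, (left, right)))
    codes = {}
    def walk(node, prefix):                   # step 4: traverse for code words
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")       # going left is a 0
            walk(node[1], prefix + "1")       # going right is a 1
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

text = "Eerie eyes seen near lake."
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)   # step 5: rescan and encode
print(len(encoded))                           # 84 bits, versus 8 * 26 = 208 for ASCII
```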
22. CS 102
Building a Tree: Scan the original text
Consider the following short text:
Eerie eyes seen near lake.
Count up the occurrences of all characters in the text.
23. Building a Tree: Scan the original text
Eerie eyes seen near lake.
What characters are present?
E  e  r  i  space  y  s  n  a  l  k  .
24. Building a Tree: Scan the original text
Eerie eyes seen near lake.
What is the frequency of each character in the text?
Char  Freq
E     1
e     8
r     2
i     1
space 4
y     1
s     2
n     2
a     2
l     1
k     1
.     1
25. Building a Tree
The queue after inserting all nodes (null pointers are not shown):
E:1  i:1  y:1  l:1  k:1  .:1  r:2  s:2  n:2  a:2  sp:4  e:8
49. Encoding the File: Traverse Tree for Codes
Perform a traversal of the tree to obtain the new code words.
Going left is a 0; going right is a 1.
A code word is only completed when a leaf node is reached.
[Figure: the completed Huffman tree, with leaves E:1, i:1, y:1, l:1, k:1, .:1, r:2, s:2, n:2, a:2, sp:4, e:8 under internal nodes of weight 2, 2, 2, 4, 4, 4, 6, 8, 10, 16 and root 26]
50. Encoding the File: Traverse Tree for Codes
Char   Code
E      0000
i      0001
y      0010
l      0011
k      0100
.      0101
space  011
e      10
r      1100
s      1101
n      1110
a      1111
51. Encoding the File
Rescan the text and encode the file
using the new code words.
Eerie eyes seen near lake.
With the code table above, the text becomes:
0000101100000110011100010101101011110110101110
01111101011111100011001111110100100101
Why is there no need for a
separator character? Because no code word is a prefix of any other.
52. Encoding the File: Results
Have we made things any
better?
84 bits to encode the text
ASCII would take 8 * 26 =
208 bits
0000101100000110011100010101101011110110101110
01111101011111100011001111110100100101
53. Lempel-Ziv (LZ) Encoding
Data compression up until the late 1970s was mainly directed
towards creating better methodologies for Huffman coding.
An innovative, radically different method was introduced
in 1977 by Abraham Lempel and Jacob Ziv.
This technique (called Lempel-Ziv) actually consists of two
considerably different algorithms, LZ77 and LZ78.
Due to patents, LZ77 and LZ78 led to many variants:
LZ77 variants: LZH, LZB, LZSS, LZR
LZ78 variants: LZFG, LZJ, LZMW, LZT, LZC, LZW
The zip and unzip utilities use the LZH technique, while UNIX's
compress methods belong to the LZW and LZC classes.
54. Example: LZ78 Compression
Encode (i.e., compress) the string ABBCBCABABCAABCAAB using the LZ78 algorithm.
The compressed message is: (0,A)(0,B)(2,C)(3,A)(2,A)(4,A)(6,B)
Note: the above is just a representation; the commas and parentheses are not actually transmitted.
The actual form of the compressed message is discussed later.
55. Example: LZ78 Compression (cont'd)
1. A is not in the Dictionary; insert it
2. B is not in the Dictionary; insert it
3. B is in the Dictionary.
BC is not in the Dictionary; insert it.
4. B is in the Dictionary.
BC is in the Dictionary.
BCA is not in the Dictionary; insert it.
5. B is in the Dictionary.
BA is not in the Dictionary; insert it.
6. B is in the Dictionary.
BC is in the Dictionary.
BCA is in the Dictionary.
BCAA is not in the Dictionary; insert it.
7. B is in the Dictionary.
BC is in the Dictionary.
BCA is in the Dictionary.
BCAA is in the Dictionary.
BCAAB is not in the Dictionary; insert it.
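The dictionary walk above can be expressed directly in code (a sketch of LZ78, not tuned for efficiency; the function names are mine):

```python
# LZ78 sketch: the dictionary starts empty; each output token is
# (index of the longest known prefix, next character).
def lz78_encode(s):
    dictionary = {}                     # phrase -> index (1-based)
    out, phrase = [], ""
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                # keep extending a known phrase
        else:
            out.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:                          # input ended inside a known phrase
        out.append((dictionary[phrase[:-1]] if phrase[:-1] else 0, phrase[-1]))
    return out

def lz78_decode(tokens):
    dictionary = {0: ""}
    out = []
    for index, ch in tokens:
        entry = dictionary[index] + ch
        out.append(entry)
        dictionary[len(dictionary)] = entry
    return "".join(out)

tokens = lz78_encode("ABBCBCABABCAABCAAB")
print(tokens)   # [(0, 'A'), (0, 'B'), (2, 'C'), (3, 'A'), (2, 'A'), (4, 'A'), (6, 'B')]
```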
56. Lossy Compression Methods
Used for compressing images and video files
(our eyes cannot distinguish subtle changes, so
losing some data is acceptable).
These methods are cheaper and need less time and
space.
Several methods:
JPEG: compresses pictures and graphics
MPEG: compresses video
MP3: compresses audio
57. JPEG Encoding
Used to compress pictures and graphics.
In JPEG, a grayscale picture is divided into 8x8
pixel blocks to decrease the number of
calculations.
Basic idea:
Change the picture into a linear (vector) set of numbers that
reveals the redundancies.
The redundancies are then removed by one of the lossless
compression methods.
58. JPEG Encoding - DCT
DCT: Discrete Cosine Transform
The DCT transforms the 64 values in an 8x8 pixel block
in a way that keeps the relative relationships between
pixels but reveals the redundancies.
Example:
A gradient grayscale
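The gradient example can be worked through with a small pure-Python 2-D DCT (a sketch of the orthonormal DCT-II; function names are mine). For a horizontal gradient every row is identical, so all vertical-frequency coefficients come out zero — the transform makes the redundancy visible:

```python
import math

# 1-D DCT-II (orthonormal): X[k] = alpha(k) * sum_n x[n] cos(pi(2n+1)k / 2N)
def dct_1d(vec):
    N = len(vec)
    out = []
    for k in range(N):
        alpha = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(alpha * sum(v * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                               for n, v in enumerate(vec)))
    return out

def dct_2d(block):
    rows = [dct_1d(row) for row in block]          # transform each row...
    cols = [dct_1d(col) for col in zip(*rows)]     # ...then each column
    return [list(r) for r in zip(*cols)]

# 8x8 horizontal grayscale gradient: 0, 32, ..., 224 on every row
block = [[32 * x for x in range(8)] for _ in range(8)]
coeffs = dct_2d(block)
print(round(coeffs[0][0], 3))   # the DC (average) term; rows 1-7 are all ~0
```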
59. Quantization & Compression
Quantization:
After the T table is created, the values are quantized to reduce the
number of bits needed for encoding.
Quantization divides each value by a constant, then
drops the fraction. This is done to optimize the number of bits
and the number of 0s for each particular application.
• Compression:
Quantized values are read from the table and redundant 0s are
removed.
To cluster the 0s together, the table is read diagonally in a
zigzag fashion. The reason is that if the table doesn't have fine
changes, the bottom right corner of the table is all 0s.
JPEG usually uses lossless run-length encoding at the
compression phase.
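The quantize-then-zigzag step can be sketched on a toy table (the 4x4 coefficient values below are made up for illustration; function names are mine):

```python
# Divide each value by a constant and drop the fraction, then read the
# table diagonally so the zeros in the bottom-right corner cluster at
# the end of the sequence, ready for run-length encoding.
def zigzag_indices(n):
    order = []
    for s in range(2 * n - 1):                      # s = i + j: one diagonal at a time
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1]) # alternate the scan direction
    return order

def quantize(table, q):
    return [[int(v / q) for v in row] for row in table]

# a made-up 4x4 coefficient table whose fine (bottom-right) values are small
T = [[160, 80, 24, 8],
     [ 80, 40,  8, 0],
     [ 24,  8,  0, 0],
     [  8,  0,  0, 0]]
quantized = quantize(T, 16)
sequence = [quantized[i][j] for i, j in zigzag_indices(4)]
print(sequence)   # [10, 5, 5, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```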
61. MPEG Encoding
Used to compress video.
Basic idea:
Each video is a rapid sequence of a set of frames. Each frame
is a spatial combination of pixels, or a picture.
Compressing video =
spatially compressing each frame
+
temporally compressing a set of frames.
62. MPEG Encoding
• Spatial Compression
• Each frame is spatially compressed by JPEG.
• Temporal Compression
• Redundant frames are removed.
• For example, in a static scene in which someone is talking,
most frames are the same except for the segment around the
speaker’s lips, which changes from one frame to the next.
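The "talking head" example can be sketched with toy frames: store a full reference frame, then only the (position, value) changes for each following frame (a simplified illustration, not the actual MPEG format; names are mine):

```python
# Temporal compression sketch: keep only the pixels that changed since
# the previous frame, instead of storing every frame in full.
def frame_delta(prev, cur):
    return [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]

def apply_delta(prev, delta):
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

talking_head = [
    [9, 9, 9, 1, 1],   # reference frame (tiny lists standing in for pixels)
    [9, 9, 9, 2, 1],   # only the "lips" region (index 3) changes
    [9, 9, 9, 3, 1],
]
deltas = [frame_delta(a, b) for a, b in zip(talking_head, talking_head[1:])]
print(deltas)   # [[(3, 2)], [(3, 3)]] -- far smaller than the full frames
```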
63. Audio Compression
Used for speech or music.
Speech: compress a 64 kbps digitized signal
Music: compress a 1.411 Mbps signal
Two categories of techniques:
Predictive encoding
Perceptual encoding
64. •Predictive Encoding
•Only the differences between samples are encoded, not
the whole sample values.
•Several standards: GSM (13 kbps), G.729 (8 kbps), and
G.723.1 (6.3 or 5.3 kbps)
•Perceptual Encoding: MP3
•CD-quality audio needs at least 1.411 Mbps and cannot
be sent over the Internet without compression.
•MP3 (MPEG audio layer 3) uses the perceptual encoding
technique to compress audio.
Audio Encoding
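Predictive encoding as described above can be sketched in a few lines: transmit the first sample, then only each difference from the previous sample. Neighbouring audio samples are similar, so the differences are small numbers that need fewer bits (a minimal sketch; names and sample values are mine):

```python
# Differential (predictive) encoding sketch.
def delta_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)   # each sample = previous sample + difference
    return out

samples = [1000, 1002, 1005, 1004, 1000, 995]
deltas = delta_encode(samples)
print(deltas)   # [1000, 2, 3, -1, -4, -5]
assert delta_decode(deltas) == samples
```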
65. Conclusion
Compression is used on all types of data
to save space and time. There are two
types of data compression: lossy and
lossless. Lossy techniques are used for
images, video and audio, where we
can bear some data loss. Lossless techniques
are used for textual data, which can be
encoded through run-length, Huffman
and Lempel-Ziv coding.