Data compression reduces the size of a data file by identifying and eliminating statistical and perceptual redundancies. There are two main types: lossless compression, which reduces file size by encoding data more efficiently without loss of information, and lossy compression, which provides greater reduction by removing unnecessary data, resulting in information loss. Popular lossy audio and video compression formats like MP3, JPEG, and MPEG exploit patterns and limitations in human perception to greatly reduce file sizes for storage and transmission with minimal impact on quality.
Computer Science (A Level) discusses data compression techniques. Compression reduces the number of bits required to represent data to save disk space and increase transfer speeds. There are two main types of compression: lossy compression, which permanently removes non-essential data and can reduce quality, and lossless compression, which identifies patterns to compress data without any loss. Common lossy techniques are JPEG, MPEG, and MP3, while common lossless techniques are run length encoding and dictionary encoding.
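Run length encoding, named above as a common lossless technique, can be sketched in a few lines of Python (a minimal illustration, not a production codec; the function names are my own):

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Run-length encode a string as (symbol, count) pairs."""
    encoded: list[tuple[str, int]] = []
    for ch in data:
        if encoded and encoded[-1][0] == ch:
            # Extend the current run.
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            # Start a new run.
            encoded.append((ch, 1))
    return encoded


def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Reverse the encoding exactly -- no information is lost."""
    return "".join(ch * count for ch, count in pairs)
```

Note that RLE only helps when the data actually contains long runs; on data without repetition the (symbol, count) pairs can be larger than the input.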
These slides cover the fundamentals of data communication and networking, including data compression, which compresses data for transmission over a communication channel. They are useful for engineering students and for candidates who want to master data communication and computer networking.
Comparison between Lossy and Lossless Compression (rafikrokon)
This presentation compares lossy and lossless compression. It discusses the group members, topics to be covered including definitions of compression, lossless compression, and lossy compression. It explains that lossless compression allows exact recovery of original data while lossy compression involves some data loss. Lossy compression removes non-essential data and has data degradation but is cheaper and requires less space and time. Lossless compression works well with repeated data and allows exact data recovery but requires more space and time. The presentation discusses uses of each compression type and their advantages and disadvantages.
This document discusses different types of audio file compression formats including lossy formats like MP3 and AAC that sacrifice quality for size, lossless formats like FLAC and ALAC that compress without quality loss, and uncompressed formats like AIFF and WAV. It provides details on popular lossy formats that remove imperceptible audio data, lossless formats that eliminate unnecessary data while retaining full quality, and uncompressed formats designed for archiving original recordings when storage space allows.
A comparison of various data compression techniques that clearly differentiates between them. It is likely to be precise and focused on the techniques themselves rather than the broader topic.
This document discusses fundamentals of lossy video compression. It introduces digital video and factors like frame rate, color resolution, and spatial resolution. It describes video compression standards including MPEG, JPEG, H.261, and H.263. MPEG standards like MPEG-1, MPEG-2, and MPEG-4 are discussed in detail regarding their applications and capabilities. The goals of video compression are to reduce file size while retaining quality for storing and transferring video efficiently.
This document discusses different compression techniques including lossless and lossy compression. Lossless compression recovers the exact original data after compression and is used for databases and documents. Lossy compression results in some loss of accuracy but allows for greater compression and is used for images and audio. Common lossless compression algorithms discussed include run-length encoding, Huffman coding, and arithmetic coding. Lossy compression is used in applications like digital cameras to increase storage capacity with minimal quality degradation.
In computer science and information theory, data compression, source coding,[1] or bit-rate reduction involves encoding information using fewer bits than the original representation.[2] Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression.
This document discusses multimedia data compression. It explains that multimedia files like images, audio, and video take up significantly more storage space than text files. Data compression reduces the size of files by removing redundant data, saving storage space and allowing faster transfers. Compression is achieved through codecs, which are hardware or software that compresses and decompresses data. Compression can be lossless, allowing exact reconstruction of the original data, or lossy, which sacrifices some quality to achieve greater compression but does not allow exact reconstruction.
The document discusses JPEG, a commonly used image file format for photographs. JPEG uses "lossy" compression to significantly reduce file sizes with minimal quality loss. It is well-suited for photos but can cause artifacts in other image types. The document also compares JPEG to other formats like PNG and GIF, and discusses vector versus raster graphics. MIDI is then explained as a protocol allowing electronic musical instruments to communicate and transmit musical data like notes, volume, and effects.
The document discusses several key topics related to digital images:
- Raster images are composed of pixels arranged in a grid, while vector images use mathematical descriptions of lines, curves and shapes. Raster images lose quality when scaled while vector images maintain quality.
- Resolution refers to the number of pixels per inch in a raster image, affecting quality of on-screen and printed display. Higher resolutions have more pixels and finer detail.
- Aspect ratio expresses the proportional relationship between an image's width and height, such as 16:9 for HDTV. Formats with unequal ratios require enlarging or adding borders for presentation.
- Common file formats include GIF for simple graphics, JPEG for photographs
The document discusses various topics relating to raster and vector images, including:
- Raster images are composed of pixels while vector images are composed of mathematical objects. Vector images can be scaled without quality loss.
- Common file formats for raster images include JPEG, TIFF, PNG, and GIF while common vector formats are EPS, AI, and PDF.
- Other topics covered include color models (RGB, CMYK), resolution, aspect ratio, and image editing software like Photoshop, Illustrator, and InDesign.
Comparison of lossy and lossless image compression using various algorithms (chezhiyan chezhiyan)
This document compares lossy and lossless image compression using various algorithms. It discusses the need for image compression to reduce file sizes for storage and transmission. Lossy compression provides higher compression ratios but some loss of information, while lossless compression retains all information without loss. The document proposes comparing algorithms like Fractal image compression and LZW, analyzing parameters like SNR, PSNR, and MSE for formats like BMP, TIFF, PNG and JPEG. It provides details on how the LZW and Fractal compression algorithms work.
Raster graphics store images as a grid of pixels that can lose detail when enlarged. Vector graphics use mathematical equations to define shapes, keeping images smooth at any size. Lossy compression discards some image data, resulting in loss of pixels and detail, while lossless compression reduces file size by about 50% without data loss. File formats like BMP, PNG, GIF, TIFF, JPG, and PSD are used for different types of images and uses.
This document discusses various methods of data compression. It begins by defining compression as reducing the size of data while retaining its meaning. There are two main types of compression: lossless and lossy. Lossless compression allows for perfect reconstruction of the original data by removing redundant data. Common lossless methods include run-length encoding and Huffman coding. Lossy compression is used for images and video, and results in some loss of information. Popular lossy schemes are JPEG, MPEG, and MP3. The document then proceeds to describe several specific compression algorithms and their encoding and decoding processes.
This document discusses data compression techniques. It begins by defining data compression as encoding information in a file to take up less space. It then covers the need for compression to save storage and transmission time. The main types of compression discussed are lossless, which allows exact reconstruction of data, and lossy, which allows approximate reconstruction for better compression. Specific lossless techniques covered include Huffman coding, which assigns variable length codes based on frequency. Lossy techniques like JPEG are also discussed. The document concludes by listing applications of compression techniques in files, multimedia, and communication.
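The Huffman coding idea described above, assigning variable-length codes based on symbol frequency, can be sketched as follows (a minimal illustration using Python's `heapq`; names and structure are my own, not taken from the documents summarized here):

```python
import heapq
from collections import Counter


def huffman_codes(text: str) -> dict[str, str]:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(text)
    # Heap entries are (frequency, tiebreaker, tree); a tree is either
    # a symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent trees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes: dict[str, str] = {}

    def walk(tree, prefix: str) -> None:
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"  # degenerate single-symbol input

    walk(heap[0][2], "")
    return codes
```

Because codes are assigned at the leaves of a binary tree, no code is a prefix of another, so the encoded bitstream can be decoded unambiguously.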
This white paper discusses various video compression techniques and standards. It explains that JPEG is used for still images while MPEG is used for video. The two main early standards were JPEG and MPEG-1. Later standards like MPEG-2, MPEG-4, and H.264 provided improved compression ratios and capabilities. Key techniques discussed include lossy compression, comparing adjacent frames to reduce redundant data, and balancing compression ratio with image quality and latency considerations for different applications like surveillance video.
Medical video compression has to be lossless to avoid the danger of diagnostic errors. The presentation outlines an approach to improving the compression ratio of medical video sequences using HEVC.
What is video compression? An introduction covering the motivation for and working methodology of video compression, with an example, applications, the need for video compression, and its advantages and disadvantages.
The document discusses various topics related to digital images, including raster images, vector images, file formats, color models, and image editing software. Raster images represent images as a grid of pixels while vector images use geometric primitives. Common file formats include JPEG, TIFF, EPS, PSD, and PDF. Color models include RGB, CMYK, and HSV/HSL. Adobe Photoshop and Illustrator are widely used image editing programs.
The document discusses multimedia compression techniques. It notes that audio, image, and video files require large amounts of data, which bandwidth and storage limitations cannot accommodate for real-time transmission and playback. Compression reduces these file sizes through lossless and lossy techniques. Popular standards like JPEG and MPEG use combinations of techniques like the discrete cosine transform, quantization, predictive coding, and entropy encoding to achieve compression while maintaining quality.
Pixels are the tiny dots that make up images on computer displays. They are arranged in a grid and the number of pixels determines the image resolution. Vector graphics use mathematical formulas to define paths, lines and shapes rather than a pixel grid. Common file formats for raster images include BMP, PNG, GIF and JPEG, while vector formats include EPS and AI. File compression reduces file sizes by removing unnecessary image data. Digital asset management systems help organize large libraries of digital files and images.
This document provides an overview of various video compression techniques and standards. It discusses fundamentals of digital video including frame rate, color resolution, spatial resolution, and image quality. It describes different compression techniques like intraframe, interframe, and lossy vs lossless. Key video compression standards discussed include MPEG-1, MPEG-2, MPEG-4, H.261, H.263 and JPEG for still image compression. Factors that impact compression like compression ratio, bit rate control, and real-time vs non-real-time are also summarized.
This document summarizes various topics related to image processing including image data types, file formats, acquisition, storage, processing, communication, display, and enhancement techniques. It discusses key concepts such as image fundamentals, color models, resolution, bit depth, file formats like JPEG, GIF, TIFF, compression techniques including lossless, lossy, intraframe, interframe, and algorithms like run length encoding and Shannon-Fano coding. Image enhancement topics covered are point processing, spatial filtering, and color image processing.
This document provides an overview of data compression techniques. It discusses lossless compression algorithms like Huffman encoding and LZW encoding which allow for exact reconstruction of the original data. It also discusses lossy compression techniques like JPEG and MPEG which allow for approximate reconstruction for images and video in order to achieve higher compression rates. JPEG divides images into 8x8 blocks and applies discrete cosine transform, quantization, and run length encoding. MPEG spatially compresses each video frame using JPEG and temporally compresses frames by removing redundant frames.
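The JPEG transform step described above can be illustrated with a direct, unoptimized 2-D DCT-II on an 8x8 block (a pedagogical sketch under the standard textbook formula; real codecs use fast DCT variants and follow this step with quantization and entropy coding):

```python
import math


def dct2_block(block: list[list[float]]) -> list[list[float]]:
    """2-D DCT-II of an 8x8 block, as in the JPEG transform step."""
    N = 8

    def alpha(k: int) -> float:
        # Normalization factor for orthonormal DCT-II.
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N)
                for y in range(N)
            )
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

For a flat block the energy concentrates entirely in the DC coefficient `out[0][0]`; quantization then divides each coefficient by a table entry and rounds, which is where the actual information loss happens.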
This document discusses image, audio, and video compression. It explains that raw multimedia data contains redundant information and compression removes this redundancy to reduce file size. There are lossless compression techniques like run-length encoding and LZW that allow for exact reconstruction of the original data. There is also lossy compression like JPEG that permanently eliminates some information, trading off quality for smaller file size. Lossy compression is generally used for audio and video.
The document discusses various topics related to data representation including:
1) How data is represented depends on the environment and each has its own rules and standards.
2) Popular network data representations include ASN.1 and XDR, with ASN.1 being a more robust standard that describes data structures unambiguously while XDR is simpler.
3) ASN.1 is commonly used for applications such as network management, secure email, and telephony due to its ability to exchange structured data across networks and platforms.
Digital data compression reduces transmission bandwidth requirements by removing redundant data. There are two types: lossy compression which permanently removes some data, and lossless compression which does not. Common lossy compression standards are JPEG for images and MP3 for audio, while ZIP files use lossless compression. The limits of lossy compression are determined by information theory concepts like source entropy.
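The information-theoretic limit referred to above is the source entropy. A zeroth-order estimate can be computed from symbol frequencies (a minimal sketch; this gives a lower bound in bits per symbol for any lossless code that treats symbols independently):

```python
import math
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Zeroth-order Shannon entropy in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    # H = -sum(p * log2(p)) over observed symbol probabilities p.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform two-symbol source yields 1 bit per symbol, while a constant source yields 0, matching the intuition that redundant data compresses well.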
This document provides an overview of data compression techniques. It discusses how data compression reduces the number of bits needed to represent data, saving storage space and transmission bandwidth. It describes lossy compression methods like JPEG and MPEG that eliminate redundant information, resulting in smaller file sizes but some loss of data quality. Lossless compression methods like ZIP and GIF are also covered, which compress data without any loss for file types like text where quality is important. Specific lossless compression techniques like run length encoding, Huffman coding, Lempel-Ziv coding are explained. The document concludes with a brief mention of image, video, audio and dictionary based compression methods.
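The Lempel-Ziv coding mentioned above works by building a dictionary of previously seen strings as the input is scanned. A minimal LZW encoder sketch (illustrative only; the decoder, which rebuilds the same dictionary, is omitted):

```python
def lzw_encode(text: str) -> list[int]:
    """LZW dictionary coding: emit codes for the longest known strings."""
    # Seed the dictionary with all single-byte strings (codes 0-255).
    dictionary = {chr(i): i for i in range(256)}
    current = ""
    output: list[int] = []
    for ch in text:
        if current + ch in dictionary:
            # Keep extending the current match.
            current += ch
        else:
            # Emit the longest known match and learn a new string.
            output.append(dictionary[current])
            dictionary[current + ch] = len(dictionary)
            current = ch
    if current:
        output.append(dictionary[current])
    return output
```

On repetitive input the dictionary quickly learns long substrings, so later repeats are emitted as single codes.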
This document provides an overview of image compression techniques. It discusses how image compression works to reduce the number of bits needed to represent image data. The main goals of image compression are to reduce irrelevant and redundant image information to produce smaller and more efficient file sizes for storage and transmission. The document outlines different compression methods including lossless compression, which compresses data without any loss, and lossy compression, which allows for some loss of information in exchange for higher compression ratios. Specific techniques like run length encoding are also explained.
This document provides an overview of image compression. It discusses what image compression is, why it is needed, common terminology used, entropy, compression system models, and algorithms for image compression including lossless and lossy techniques. Lossless algorithms compress data without any loss of information while lossy algorithms reduce file size by losing some information and quality. Common lossless techniques mentioned are run length encoding and Huffman coding while lossy methods aim to form a close perceptual approximation of the original image.
This document summarizes image processing techniques in the Android environment. It discusses compressed, saturated, and cropped images created using mathematical operations and techniques like JPEG lossy compression. It also covers adding text to images and adjusting brightness/contrast. Key aspects of image processing like compression, techniques like JPEG, and performance metrics for evaluating Android apps like CPU usage, memory usage, and GPU are overviewed.
Types of Data compression, Lossy Compression, Lossless compression and many more. How data is compressed etc. A little extensive than CIE O level Syllabus
the compression of images is an important step before we start the processing of larger images or videos. The compression of images is carried out by an encoder and output a compressed form of an image. In the processes of compression, the mathematical transforms play a vital role.
The presentation layer is responsible for data representation, compression, encryption and formatting for transmission between applications. It encodes application data into messages and decodes received messages. Common data representations include ASN.1 and XDR. Lossy and lossless compression techniques are used to reduce file sizes. Encryption transforms plaintext into ciphertext using keys to protect confidentiality during transmission.
Task 1 – digital graphics for computer gamesJames-003
This document discusses digital graphics for computer games. It covers pixels and resolution, common file formats like BMP, PNG, GIF and their uses. It also discusses compression techniques, image capture devices, optimizing computer performance at different levels, and storage and asset management for game development.
Pixel resolution refers to the number of pixels in an image but is technically not a true resolution. Standards specify it should not be referred to as resolution. Pixel resolution is commonly described as the number of pixel columns by rows, such as 7680 by 4320. It can also be cited as total megapixels or pixels per unit. True resolution refers to image detail or lines per unit, which pixels provide an upper bound for. The number of effective pixels in an image is those that contribute to the final image, not including unused edge pixels.
Pixel resolution determines the number of distinct pixels that can be displayed. Vector graphics use geometrical primitives based on mathematical expressions, while raster graphics represent images as a grid of pixels.
File formats specify how bits are used to encode image information. Common formats include GIF for images with a limited color palette, JPEG for photographic images using lossy compression, TIFF for bitmaps, EPS for vector graphics and bitmaps, BMP for Windows graphics, PNG for truecolor images, and PSD for layered Photoshop files.
Data compression reduces file size by eliminating statistical redundancy (lossless) or removing marginally important information (lossy). Image capture devices like digital cameras and scanners convert images to digital formats,
Data compression reduces the size of data files by removing redundant information while preserving the essential content. It aims to reduce storage space and transmission times. There are two main types of compression: lossless, which preserves all original data, and lossy, which sacrifices some quality for higher compression ratios. Common lossless methods are run-length encoding, Huffman coding, and Lempel-Ziv encoding, while lossy methods include JPEG, MPEG, and MP3.
This document provides an overview of lossless data compression techniques. It discusses Huffman coding, Shannon-Fano coding, and Run Length Encoding as common lossless compression algorithms. Huffman coding assigns variable length binary codes to symbols based on their frequency, with more common symbols getting shorter codes. Shannon-Fano coding similarly generates a binary tree to assign codes but aims for a roughly equal probability between left and right subtrees. Run Length Encoding replaces repeated sequences with the length of the run and the symbol. The document contrasts lossless techniques that preserve all data with lossy techniques used for media that can tolerate some loss of information.
Computer displays are made up of grids of small rectangular pixels that together form images. The smaller and closer together the pixels are, the higher the display's resolution and image quality. However, higher resolution requires larger file sizes to store more pixel data. Common file formats for raster graphics include BMP, PNG, GIF, TIFF and JPEG, which use different types and levels of compression to reduce file sizes. Compression can decrease file sizes but may also lower image quality or slow opening times if decompression is required. Optimization aims to improve how efficiently systems use resources like processing time, memory and power.
There are two categories of data compression methods: lossless and lossy. Lossless methods preserve the integrity of the data by using compression and decompression algorithms that are exact inverses, while lossy methods allow for data loss. Common lossless methods include run-length encoding and Huffman coding, while lossy methods like JPEG, MPEG, and MP3 are used to compress images, video, and audio by removing imperceptible or redundant data.
This document discusses data representation, data compression, and encryption. It begins by defining data representation and describing how computers store different types of data like numbers, text, graphics, and sound. It then discusses several data representation methods and formats like ASCII, EBCDIC, and ASN.1. The document also covers data compression techniques including lossless and lossy compression. Common audio, video, and image compression formats and their properties are described. Finally, the document provides an overview of encryption, describing the encryption and decryption process and some basic encryption concepts and techniques.
Lossy compression removes invisible information from audio and video files to reduce file sizes, resulting in some loss of quality but still retaining the essential aspects. Lossless compression reduces file sizes by removing redundant data but does not degrade quality as it allows exact reconstruction. Common lossy audio and video formats include MP3, JPEG, and MPEG-4, while lossless formats include FLAC, PNG, and ZIP.
This is the subject slides for the module MMS2401 - Multimedia System and Communication taught in Shepherd College of Media Technology, Affiliated with Purbanchal University.
An image in its original form contains large amount of redundant data which consumes huge
amount of memory and can also create storage and transmission problem. The rapid growth in the field of
multimedia and digital images also needs more storage and more bandwidth while data transmission. By
reducing redundant bits within the image data the size of image can also be reduced without affecting
essential data. In this paper we are representing existing lossless image compression techniques. The
image quality will also be discussed on the basis of certain performance parameters such as compression
ratio, peak signal to noise ratio, root mean square.
Pbl1
1.
2.
Data compression (bit-rate reduction) involves encoding
information using fewer bits than the original
representation.
The process of reducing the size of a data file is popularly
referred to as data compression, although its formal name is
source coding (coding done at the source of the data before
it is stored or transmitted).
Compression can be either lossy or lossless.
Lossless compression reduces bits by identifying and
eliminating statistical redundancy. No information is lost in
lossless compression.
Lossy compression reduces bits by identifying unnecessary
information and removing it.
3.
The lossless data compression method essentially has two steps: analyze the file,
then eliminate the redundant data found within it.
For example, if a file compressor analyzed and eliminated all the repeated words in
a document file, the result would be a document with about 60 percent fewer
words. Such is the case with compressed files: the application analyzes the file,
removes all the equivalent superfluous data bits, and shrinks the overall size of
the file.
However, if you attempted to read the article with the omitted words, it wouldn't
make any sense. Therefore, the file compression applications insert placeholders
where those eliminated words were.
When you extract the file, the application automatically restores the repeated words
to their places, making the file readable. Because no data is lost, this method is
called lossless compression.
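The placeholder idea described above can be sketched as a toy word-level compressor. This is purely illustrative: the digit-placeholder convention used here would collide with words that are themselves numbers, which real schemes avoid with escaping.

```python
def compress(text):
    # Replace each repeated word with a numeric placeholder that points
    # back at its first occurrence (the "placeholders" the slide mentions).
    dictionary = []
    tokens = []
    for word in text.split():
        if word in dictionary:
            tokens.append(str(dictionary.index(word)))
        else:
            dictionary.append(word)
            tokens.append(word)
    return tokens

def decompress(tokens):
    # Rebuild the dictionary in the same order and restore each
    # placeholder to the word it points at, so no data is lost.
    dictionary = []
    words = []
    for tok in tokens:
        if tok.isdigit():
            words.append(dictionary[int(tok)])
        else:
            dictionary.append(tok)
            words.append(tok)
    return " ".join(words)
```

Because `decompress(compress(text))` returns the original text exactly, the round trip loses nothing, which is the defining property of lossless compression.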
4.
Lossy data compression is the converse of lossless data compression. In these
schemes, some loss of information is acceptable. Dropping nonessential detail from
the data source can save storage space.
Lossy data compression schemes are informed by research on how people perceive
the data in question.
For example, the human eye is more sensitive to subtle variations in luminance than
it is to variations in color.
JPEG image compression works in part by rounding off nonessential bits of
information. There is a corresponding trade-off between information lost and the
size reduction.
A number of popular compression formats exploit these perceptual differences,
including those used in music files, images, and video.
5.
Data compression works by finding patterns in data that occur frequently,
and changing their representation to something short, so that the total
amount of data is reduced without sacrificing any useful information.
For example, suppose you have a stream of data that consists of only ones
and zeros, like this: 10100001010101001. And suppose that you know that
this data stream usually contains a lot more zeros than ones; that is, a
stream is more likely to be 100000100000101000000 than
111110111011011111.
In this case, you can develop a way of abbreviating the zeros so that they
take up less space. You can define A as representing a one, B as
representing a single zero, and C as representing four consecutive zeros.
Now suppose you have a data stream like this:
100000100100000000000001010010000
6.
After you encode it, it will look like ACBABBACCCBABABBAC. Notice that
this is shorter than the original, because your encoding method helped
abbreviate long strings of consecutive zeros. This is data compression.
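The encoding walked through above can be sketched directly. This is a toy illustration of the slides' A/B/C scheme; in practice each output symbol would itself need a binary code, so realizing actual bit savings requires more care.

```python
def encode(bits: str) -> str:
    # A = one "1", B = a single "0", C = four consecutive "0"s,
    # exactly as defined on the previous slide.
    out = []
    i = 0
    while i < len(bits):
        if bits[i] == "1":
            out.append("A")
            i += 1
        elif bits[i:i + 4] == "0000":
            out.append("C")
            i += 4
        else:
            out.append("B")
            i += 1
    return "".join(out)

print(encode("100000100100000000000001010010000"))  # ACBABBACCCBABABBAC
```

The 33-bit input becomes 18 symbols, and the scheme is reversible, so nothing is lost.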
In order for data compression to work, the data stream must not be random.
There has to be some sort of pattern in it, or you can't compress it. For
example, if the stream contains ones and zeros, but there's no pattern, and
neither ones nor zeroes are more common, then you can't compress the data
stream, because there's nothing predictable about it.
If you want a more formal definition, data compression consists of a way of
encoding a set of input messages into a set of output messages such that the
most common input messages encode to the shortest output messages, and the
least common input messages encode to the longest output messages. As long
as the input messages are not randomly distributed, this will result in an output
stream that is shorter than the input stream. It's all information theory.
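A classic realization of this "common messages get short codes" principle is Huffman coding. A minimal sketch using Python's heapq (illustrative, not an optimized implementation):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Each heap entry is [frequency, tie-breaker id, {symbol: code}].
    # Repeatedly merge the two least frequent entries; symbols merged
    # earlier end up deeper in the tree and get longer codes.
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        codes = {s: "0" + c for s, c in lo[2].items()}
        codes.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, codes])
        next_id += 1
    return heap[0][2]

huffman_codes("aaabbc")  # the most frequent symbol, "a", gets the shortest code
```

With input "aaabbc", the frequent symbol "a" receives a one-bit code while the rarer "b" and "c" receive two-bit codes, matching the formal definition above.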
7.
The objective of image compression is to
reduce irrelevance and redundancy of the
image data in order to be able to store
or transmit data in an efficient form.
Image compression may be lossy or lossless.
8.
Lossless compression is preferred for archival purposes and
often for medical imaging, technical drawings, clip art, or
comics.
Lossless compression is possible because most real-world data
has statistical redundancy.
For example, an image may have areas of colour that do not
change over several pixels; instead of coding "red pixel, red
pixel, ..." the data may be encoded as "279 red pixels". This is
a basic example of run-length encoding; there are many
schemes to reduce file size by eliminating redundancy.
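The "279 red pixels" example above is plain run-length encoding, which can be sketched in a few lines (pixels here are just strings, for illustration):

```python
from itertools import groupby

def rle_encode(pixels):
    # Collapse each run of identical values into a (count, value) pair,
    # e.g. 279 consecutive "red" pixels become (279, "red").
    return [(len(list(group)), value) for value, group in groupby(pixels)]

def rle_decode(pairs):
    # Expand each (count, value) pair back into its run of pixels.
    return [value for count, value in pairs for _ in range(count)]
```

The round trip is exact, so run-length encoding is lossless; it only pays off when the data actually contains long runs.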
9. Methods for lossless image compression are:
Run-length encoding – used as the default method
in PCX and as one of the possible methods
in BMP, TGA, and TIFF
DPCM and Predictive Coding
Entropy encoding
Adaptive dictionary algorithms such as LZW –
used in GIF and TIFF
Deflation – used in PNG, MNG, and TIFF
Chain codes
10.
Lossy compression methods, especially when used at low bit
rates, introduce compression artifacts.
Lossy methods are especially suitable for natural images such
as photographs in applications where minor (sometimes
imperceptible) loss of fidelity is acceptable to achieve a
substantial reduction in bit rate. The lossy compression that
produces imperceptible differences may be called visually
lossless.
Lossy image compression can be used in digital cameras, to
increase storage capacities with minimal degradation of
picture quality. Similarly, DVDs use the lossy MPEG-2 Video
codec for video compression.
11.
Methods for lossy compression:
Reducing the color space to the most common colors in the image. The
selected colors are specified in the color palette in the header of the
compressed image. Each pixel just references the index of a color in the
color palette; this method can be combined with dithering to
avoid posterization.
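The palette method described above can be sketched as follows. This is a crude illustration using squared RGB distance as the "nearest color" measure; real encoders use better quantization and dithering.

```python
from collections import Counter

def palette_compress(pixels, palette_size=2):
    # Keep only the most common colors as the palette (stored in the
    # image header); every pixel then becomes an index into the palette,
    # with other colors snapped to the nearest kept entry.
    palette = [color for color, _ in Counter(pixels).most_common(palette_size)]

    def nearest(color):
        return min(range(len(palette)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(palette[i], color)))

    return palette, [nearest(p) for p in pixels]
```

Colors outside the palette cannot be recovered after compression, which is why this method is lossy.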
In contrast to lossless compression, which retains the integrity of the
original file, the so-called lossy data compression method scans the file
being compressed to determine what information the file can do without. It
then eliminates those bits completely, with no method to retrieve that data.
This method is akin to taking a picture with your camera phone, opening a
photo-editing app, cropping off the edges of the picture, and then sending it
to a friend. The recipient of that message cannot restore the pixels you
cropped off before you sent the image. Such is the case with lossy
compression. While this method is more effective at reducing the size of
the file, you won't be able to restore the file to its original state when you
extract it on the back end of the process.
12.
What is so-called image compression
coding?
It stores the image as a bit-stream that is as compact
as possible and allows the decoded image to be displayed on
the monitor as faithfully as possible.
13.
The image file is converted into a series of
binary data, which is called the bit-stream.
The decoder receives the encoded bit-stream and
decodes it to reconstruct the image.
The total data quantity of the bit-stream is
less than the total data quantity of the
original image.
14.
15.
16. GIF – Graphics Interchange Format
Compressed but does not lose any of the
original data (lossless)
Limited to 256 colors
Still patented in a few countries
PNG – Portable Network Graphics
Up to 48 bits worth of color
New graphic format
17.
JPEG: Joint Photographic Experts Group – an
international standard since 1992.
Compresses the data but can lose some of
the original content (lossy).
Contains millions of colors.
Works with colour and greyscale images.
Up to 24 bit colour images (Unlike GIF)
Target photographic quality images (Unlike
GIF)
Suitable for many applications, e.g., satellite,
medical, general photography.
18.
19.
20.
“Audio compression is a way to reduce the
size of the audio file.”
A form of data compression designed to
reduce the size of audio files
Audio compression can be lossless or lossy
Audio compression algorithms are typically
referred to as audio codecs.
21. 2 types of Audio Compression
Lossless - allows one to preserve an exact copy
of one's audio files
Usage: For archival purposes, editing, audio
quality.
Lossy - makes irreversible changes but achieves far greater
compression; uses psychoacoustics, recognizing
that not all data in an audio stream can be
perceived by the human auditory system.
Usage: distribution of streaming audio, or
interactive applications
22.
Codecs:
Lossless:
Free Lossless Audio Codec (FLAC)
Apple Lossless
MPEG-4 ALS
Monkey's Audio
Lossless Predictive Audio Compression (LPAC)
Lossless Transform Audio Compression (LTAC)
Lossy:
MP2 – MPEG-1 Layer 2 audio codec
MP3 – MPEG-1 Layer 3 audio codec
MPC – Musepack
Ogg Vorbis
AAC – Advanced Audio Coding (MPEG-2 and MPEG-4)
WMA – Windows Media Audio
AC-3 – Dolby Digital (A/52)
23.
Moving Picture Experts Group
An ISO/IEC working group, established in
1988 to develop standards for digital audio
and video formats, including high-fidelity
audio compression.
24.
MPEG-1
Designed for up to 1.5 Mbit/sec.
Used to compress video; designed especially
for Video CD (VCD).
MPEG-2
Designed for between 1.5 and 15 Mbit/sec.
Similar to MPEG-1, but it can be used for more
applications.
Transmission rates are more than double the
transmission rates for MPEG-1.
Works with HDTV and DVD.
25.
MPEG-4
Designed especially for the Internet.
Provides greater audio and video interactivity than
previous MPEG versions.
It allows developers to control objects
independently in a scene.
MPEG-4 includes the capability of representing
natural and synthesized sound and also supports
natural textures, images, photographs, natural video
and animated video.
27.
MP3
The name of the file extension and also the
name of the file type for MPEG audio.
A popular audio file that can be opened in
Windows Media Player and many other players.
WAV
WAV files are a format for sound files
developed by Microsoft, with a .wav file
extension.
28.
Ogg
An audio compression format, comparable
to other formats used to store and play
digital music, but different in that it is free,
open, and unpatented.
It uses Vorbis, a specific audio compression
scheme designed to be contained in
Ogg.
29.
WMA
Short for Windows Media Audio.
WMA is a Microsoft file format for encoding
digital audio files, similar to MP3, though it can
compress files at a higher rate than MP3.
WMA files, which use the ".wma" file
extension, can be compressed to
match many different connection speeds, or
bandwidths.
30.
31.
Once a video signal is digital, it requires a large amount
of storage space and transmission bandwidth.
To reduce the amount of data, several strategies are
employed that compress the information without
negatively affecting the quality of the image.
Storing and transmitting uncompressed raw video is not
an efficient technique because it needs large amounts of
storage and bandwidth.
Digital Versatile Disc (DVD), DSS, and internet video
all use compressed digital data, because raw video takes a
lot of space to store and large bandwidth to transmit.
32.
Video compression is used for
these applications because it requires less storage space and less
bandwidth to transmit data.
With efficient compression techniques, a significant reduction in
file size can be achieved with little or no adverse effect on the
visual quality. The video quality can be affected if the file size is
further lowered by raising the compression level for a given
compression technique.
Videos are sequences of images displayed at a high rate. Each of
these images is called a frame.
The human eye cannot notice small changes in the frames, such as a
slight difference in color.
Typically 30 frames are displayed on the screen every second.
33.
Video compression standards do not require the
encoding of all the details, and some of the less
important video details are lost, because lossy
compression is used for its ability to achieve very high
compression ratios.
Compression is less efficient during sequences of fast movement,
because fewer MBs stay in the same position from frame to
frame. In fact, users may notice video artifacts during
these sequences if the file is over-compressed.
34.
To accomplish this, an application known as a “codec”
analyzes the video frame by frame, and breaks each frame
down into square blocks known as “macroblocks.”
One macroblock (MB) typically covers a 16×16 block of pixels. The
codec then analyzes each frame, checking for changes in
the MBs.
Areas where the MBs do not change for several frames in a
row are noted and further analyzed.
If the video compression codec determines that these areas
can be removed from some of the frames, it does so, thus
reducing overall file size.
35.
36.
Intra frame ( I )
- Typically about 12 frames between successive I frames
- Every MB of the frame is coded using spatial redundancy
Predictive frame ( P )
- Encoded from the previous I or P reference frame
- Most of the MBs of the frame are coded exploiting
temporal redundancy in the past
Bi-directional frames ( B )
- Encoded from previous and future I or P frames
- Most of the MBs of the frame are coded exploiting
temporal redundancy in the past and in the future
37.
38. Lossy
Lossy compression reduces file size by a considerably greater
amount than lossless compression, but it loses both information
and quality.
The compressed file has less data in it than the original file.
A file can lose a relatively large amount of data before you start
to notice a difference.
Lossy compression makes up for the loss in quality by
producing comparatively small files.
For example, DVDs are compressed using the MPEG-2 format,
which can make files 15 to 30 times smaller, but we still tend
to perceive DVDs as having a high-quality picture.
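One simple way lossy schemes discard information is quantization: rounding values to a coarser scale, after which the data can no longer be recovered exactly. A minimal sketch (not the actual MPEG-2 quantizer):

```python
def quantize(samples, step):
    """Lossy: round each sample to the nearest multiple of `step`."""
    return [step * round(s / step) for s in samples]

orig = [12, 13, 14, 100, 101, 102]
q = quantize(orig, 10)
print(q)  # [10, 10, 10, 100, 100, 100]
```

The small variations are gone for good, but the quantized runs of identical values are now far easier to compress.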
39. Lossless
Lossless compression is exactly what it sounds like:
compression where none of the information is lost.
It produces a less compressed file, but maintains the original
quality, reducing the file size by encoding the information more
efficiently.
If file size is not an issue, using lossless compression will
result in a perfect-quality picture.
For example, a video editor transferring files from one
computer to another using a hard drive might choose to use
lossless compression to preserve quality while he or she is
working.
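Run-length encoding, mentioned among the common lossless techniques, makes the "no information lost" property concrete; a short sketch showing that decoding reproduces the input exactly:

```python
def rle_encode(data):
    """Lossless run-length encoding: a list of [value, run_length] pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([value, 1])  # start a new run
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

data = [7, 7, 7, 7, 2, 2, 9]
enc = rle_encode(data)
print(enc)                      # [[7, 4], [2, 2], [9, 1]]
assert rle_decode(enc) == data  # perfect reconstruction
```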
40.
Start by encoding the first frame using a still-image
compression method.
Then encode each successive frame by identifying the
differences between the frame and its predecessor, and
encoding those differences. If a frame is very different from
its predecessor, it should be coded independently of any
other frame.
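The frame-differencing scheme above can be sketched in a few lines; frames are simplified to flat lists of pixel values, and the independent-frame fallback for scene changes is omitted:

```python
def delta_encode(frames):
    """First frame stored whole; later frames as pixel differences."""
    encoded = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, curr)])
    return encoded

def delta_decode(encoded):
    frames = [encoded[0]]
    for diff in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], diff)])
    return frames

frames = [[10, 20, 30], [10, 21, 30], [10, 21, 31]]
enc = delta_encode(frames)
print(enc)  # [[10, 20, 30], [0, 1, 0], [0, 0, 1]]
assert delta_decode(enc) == frames
```

The difference frames are mostly zeros, which is exactly what makes them cheap to store.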
41. Intraframe
Intra-frame compression considers frames one at a time,
seeing them only as still images. It can analyze brightness
and color and search for areas that can be optimized, but it
does not consider changes between frames.
occurs within individual frames
designed to minimize the duplication of data in each
picture (spatial redundancy)
42. Interframe
Inter-frame compression is a brute-force method that often
requires significantly more CPU time than intra-frame
compression, but it can achieve a better balance between file
size and quality loss.
compression between frames
designed to minimize data redundancy in successive
pictures (temporal redundancy)
43.
Flow Control and Buffering
Temporal Compression
-adjacent frames are highly correlated
Spatial Compression
-nearby pixels are often correlated (as in still images)
Discrete Cosine Transform (DCT)
Vector Quantization (VQ)
Fractal Compression
Discrete Wavelet Transform (DWT)
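To show the idea behind the DCT, here is a naive, unnormalized 1-D DCT-II (real codecs apply scaled 2-D versions to 8 x 8 blocks); a constant signal ends up with all of its energy in the first coefficient, which is what makes smooth image areas so compressible:

```python
import math

def dct_1d(x):
    """Naive 1-D DCT-II, the transform at the heart of JPEG/MPEG coding."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

# A flat (constant) signal: coefficient 0 carries everything,
# the remaining coefficients are ~0 and can be dropped cheaply.
coeffs = dct_1d([5.0, 5.0, 5.0, 5.0])
print(coeffs[0])  # 20.0
```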
44. Examples:
AVI: Audio Video Interleave
-used to store audio and video data in a file
-files use the .AVI extension
JPEG 2000: compression standard for still images
-lower latency
-supports both lossless and lossy compression
MPEG-2 & MPEG-4: video compression standards
-MPEG-2 is widely used for DVD discs and digital television
broadcasting
-used as the encoder before transmission
45.
The ISO/IEC, or International Organization for Standardization and the
International Electrotechnical Commission, have a group called the Moving
Pictures Experts Group or MPEG. MPEG is responsible, for example, for the
familiar compression formats MPEG-1, MPEG-2 and MPEG-4.
The ITU-T standardizes formats for the International Telecommunications
Union, a United Nations Organization. Some popular ITU-T compression
formats include the H.261 and H.264 formats.
There are other compression formats, such as Intel Indeo and RealVideo (based
on the ITU-T H.263 codec). These are just as useful as the ones standardized
by the international groups, although some video sharing websites won’t accept
them.
There are also a few different formats to consider when exporting for the web:
MPEG4 (which includes .MV4 files), MPEG2, H.264, DivX, Quicktime,
Window Media Video(WMV), etc.
It’s important not to get video compression formats mixed up with media
container formats. A media container is a file format that contains data that has
been compressed using a video compression format. So the media container is
the end product of the video.
46.
Step 1: Add Video File
Click the +FILE button in the upper left of the program
interface. Choose the video you want to convert in the Add
File dialog box and press Open.
47.
Step 2: Choose the Format or Device Preset
Choose the desired video format or target mobile device from the list of
presets. You can also use the Search function to quickly find the format or
device you need. Next, choose the output folder for the compressed videos by
clicking Browse and selecting the desired destination. By default, the output
video will be saved in C:\Users\%your username%\Videos\Movavi Library.
48.
Step 3: Define Quality and Size Values
Return to the source file list and click on the value displayed in
the Quality/Size column. A dialog box will open. Move the slider bar to adjust the
output file size and bitrate to meet your needs. Note that the output video size value is
only an estimate; the actual size of the converted video file may differ slightly. Check out
our detailed article for other ways to reduce video size.
49. Step 4: Start the Video Compression
Press the Convert button to start the compression process. After the operation is
complete, the output folder with the converted video will open automatically.
51. Abstract Syntax Notation One (ASN.1) is a standard and notation that
describes rules and structures for representing, encoding, transmitting,
and decoding data in telecommunications and computer networking.
52. The notation provides a certain number of pre-defined
basic types such as:
integers (INTEGER),
booleans (BOOLEAN),
character strings (IA5String, UniversalString...),
bit strings (BIT STRING),
etc.,
and makes it possible to define constructed types such
as:
structures (SEQUENCE),
lists (SEQUENCE OF),
choice between types (CHOICE),
etc.
53.
ASN.1 sends information in any form anywhere it needs to be
communicated digitally. ASN.1 covers only the structural aspects of
information; there are no operators to handle the values once they are
defined, or to make calculations with them. It is therefore not a
programming language.
One of the main reasons for the success of ASN.1 is that the notation is
associated with several standardized encoding rules, such as the BER
(Basic Encoding Rules) and, more recently, the PER (Packed Encoding
Rules), which are useful for applications with tight bandwidth
constraints.
Encoding rules describe how the values defined in ASN.1 should be
encoded for transmission regardless of machine, programming
language, or how it is represented in an application program.
ASN.1's encodings are more streamlined than many competing
notations, enabling rapid and reliable transmission of extensible
messages, which is an advantage for wireless broadband.
Because ASN.1 has been an international standard since 1984, its
encoding rules are mature and have a long track record of reliability and
interoperability.
ASN.1 is widely used in industry sectors where efficient, low-bandwidth
encoding of information is required.
55. ASN.1's abstract syntax is similar in form to that of any high level programming language.
For example, consider the following C structure:
struct Student {
char name[50]; /* "Foo Bar" */
int grad; /* Grad student? (yes/no) */
float gpa; /* 1.1 */
int id; /* 1234567890 */
char bday[8]; /* mm/dd/yy */
}
Its ASN.1 counterpart is:
Student ::= SEQUENCE {
name OCTET STRING, -- 50 characters
grad BOOLEAN, -- comments are preceded
gpa REAL, -- by "--"
id INTEGER,
bday OCTET STRING -- birthday
}
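As an illustration of the BER/DER tag-length-value idea mentioned above, here is a minimal sketch that encodes a non-negative INTEGER; it handles only short-form lengths and non-negative values, so it is nowhere near a full encoder:

```python
def ber_integer(value):
    """Minimal DER/BER-style encoding of a non-negative INTEGER (tag 0x02)."""
    if value < 0:
        raise ValueError("sketch handles non-negative values only")
    # (bit_length() + 8) // 8 keeps a leading 0x00 byte when the top bit
    # is set, so the value is not misread as negative on decode.
    body = value.to_bytes(max(1, (value.bit_length() + 8) // 8), "big")
    return bytes([0x02, len(body)]) + body   # tag, length, value

print(ber_integer(1234567890).hex())  # 0204499602d2
```

Reading back: tag 02 (INTEGER), length 04, then the four content bytes 49 96 02 d2.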
56.
ASN.1 has been adopted in communications protocol specifications
across many fields, including:
Telecommunications, including 3GPP mobile phones
Intelligent Transport Systems ITS
Internet voice communications technology in the VoIP
Multimedia standards
Security-related systems, including smart-cards and certificates - the
basis for e-commerce
Embedded systems communications
Air traffic control
57.
The eXternal Data Representation (XDR) is a standard for the description and
encoding of data. XDR uses a language to describe data formats, but the
language is used only for describing data and is not a programming
language. Protocols such as Remote Procedure Call (RPC) and the Network
File System (NFS) use XDR to describe their data formats.
XDR is an alternative to ASN.1. XDR is much simpler than ASN.1, but less
powerful. For instance:
◦ XDR uses implicit typing. Communicating peers must know the type of any
exchanged data. In contrast, ASN.1 uses explicit typing; it includes type
information as part of the transfer syntax.
◦ In XDR, all data is transferred in units of 4 bytes. Numbers are transferred
in network order, most significant byte first.
◦ Strings consist of a 4 byte length, followed by the data (and perhaps
padding in the last byte). Contrast this with ASN.1.
◦ Defined types include: integer, enumeration, boolean, floating point, fixed
length array, structures, plus others.
One advantage that XDR has over ASN.1 is that current implementations of
ASN.1 execute significantly slower than XDR.
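The XDR string rule listed above (4-byte big-endian length, then the data, then zero padding to a 4-byte boundary) can be sketched directly with Python's struct module; the output matches the filename field in the example dump on the next slide:

```python
import struct

def xdr_string(s):
    """XDR string: 4-byte big-endian length, data, zero padding to 4 bytes."""
    data = s.encode("ascii")
    pad = (4 - len(data) % 4) % 4          # fill to the next 4-byte unit
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

print(xdr_string("sillyprog").hex())
# 0000000973696c6c7970726f67000000  (length 9, then 3 zero-bytes of fill)
```

Because the type is implicit, the receiver must already know a string is coming at this point in the stream.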
58. Suppose there is a user named "john" who wants to store his Lisp
program "sillyprog", which contains just the data "(quit)". His file
would be encoded as follows:
OFFSET  HEX BYTES      ASCII   COMMENTS
------  -----------    -----   --------
 0      00 00 00 09    ....    -- length of filename = 9
 4      73 69 6c 6c    sill    -- filename characters
 8      79 70 72 6f    ypro    -- ... and more characters ...
12      67 00 00 00    g...    -- ... and 3 zero-bytes of fill
16      00 00 00 02    ....    -- filekind is EXEC = 2
20      00 00 00 04    ....    -- length of interpretor = 4
24      6c 69 73 70    lisp    -- interpretor characters
28      00 00 00 04    ....    -- length of owner = 4
32      6a 6f 68 6e    john    -- owner characters
36      00 00 00 06    ....    -- length of file data = 6
40      28 71 75 69    (qui    -- file data bytes ...
44      74 29 00 00    t)..    -- ... and 2 zero-bytes of fill
59.
MIME (Multipurpose Internet Mail
Extensions) is a standard that expands
the limited capabilities of email, in
particular allowing documents (such as
images, sound, and text) to be inserted
in a message.
60. MIME adds the following features to email
service:
The ability to send multiple attachments with a
single message;
Unlimited message length;
Use of character sets other than ASCII code;
Use of rich text (layouts, fonts, colors, etc)
Binary attachments (executable, images,
audio or video files, etc.), which may be
divided if needed.
61. MIME uses special header directives to describe the format
used in a message body, so that the email client can interpret
it correctly:
MIME-Version: This is the version of the MIME standard
used in the message. Currently only version 1.0 exists.
Content-type: Describes the data's type and subtype. It
can include a "charset" parameter, separated by a semicolon, defining which character set to use.
Content-Transfer-Encoding: Defines the encoding used in
the message body
Content-ID: Represents a unique identification for each
message segment
Content-Description: Gives additional information about
the message content.
Content-Disposition: Defines the attachment's settings, in
particular the name associated with the file, using the
attribute filename.
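A hand-rolled sketch of these headers in action, building a minimal multipart message with one base64-encoded attachment; the boundary string and filename are invented for the example, and real code would normally use an email library rather than string assembly:

```python
import base64

# Illustrative values, not from any real message:
boundary = "----=_sketch_boundary"
payload = base64.b64encode(b"\x89PNG...fake image bytes...").decode("ascii")

message = "\r\n".join([
    "MIME-Version: 1.0",                                   # only 1.0 exists
    f'Content-Type: multipart/mixed; boundary="{boundary}"',
    "",
    f"--{boundary}",
    "Content-Type: text/plain; charset=utf-8",             # type + charset
    "",
    "See the attached image.",
    f"--{boundary}",
    "Content-Type: image/png",
    "Content-Transfer-Encoding: base64",                   # binary as text
    'Content-Disposition: attachment; filename="logo.png"',
    "",
    payload,
    f"--{boundary}--",                                     # closing boundary
])
print(message)
```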
62.
63.
64.
Encryption is a method used to enhance the
security of a file or message by scrambling
the contents so that it can be read only by
someone who has the right key to
unscramble it. For example, the information
used for transactions such as online
purchases (e.g., address, phone number, and
credit card number) is usually encrypted to
help keep it safe.
65.
66.
Symmetric keys - a single shared key is used both to
encrypt and to decrypt the transmitted information.
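A toy illustration of the symmetric idea using XOR (this is not secure; real systems use ciphers such as AES): applying the identical key a second time undoes the scrambling.

```python
def xor_cipher(data, key):
    """Toy symmetric cipher: the same key both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"                              # shared by both parties
ciphertext = xor_cipher(b"hello world", key)
assert xor_cipher(ciphertext, key) == b"hello world"  # same key decrypts
```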
67.
Asymmetric keys - the receiver’s public key is used to
encrypt, and the receiver’s private key to decrypt.
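A textbook illustration of the asymmetric idea using toy RSA numbers (the primes p = 61 and q = 53 are far too small for real use, where keys are 2048 bits or more):

```python
# Toy RSA key pair from tiny textbook primes.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (2753), Python 3.8+

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the PUBLIC key
plaintext = pow(ciphertext, d, n)  # only the PRIVATE key holder can decrypt
print(ciphertext, plaintext)       # 2790 65
```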
68.
Preserves the confidentiality of the file or
message.
Can save money on extra protection software, as
the machine that handles the encrypted message
does not itself have to be secured.
69.
If the key to unlock the encrypted file is lost,
the data can no longer be accessed and is
effectively lost.
Overall performance of the machine that uses
the data will decrease, since the encryption
process requires considerable processing
power.
The encrypted message can be difficult to use
because of the limitations placed on it.