This document provides an overview of digital image processing and image compression techniques. It defines what a digital image is and discusses the advantages and disadvantages of digital images over analog images. It describes the fundamental steps in digital image processing as well as the types of data redundancy that can be exploited for image compression: coding, interpixel, and psychovisual redundancy. Common image compression models and lossless compression techniques such as Lempel-Ziv-Welch coding are also summarized.
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing.
Presentation given in the B.Tech 6th Semester Seminar during session 2009-10 by Paramjeet Singh Jamwal, Poonam Kanyal, Rittitka Mittal and Surabhi Tyagi.
2. Outline
Introduction
• What is an Image?
• Digital Image Processing
• Advantages & Disadvantages
• Fundamental Steps in DIP
Fundamentals
• Data Compression & Data Redundancy
• Types of Data Redundancy
• Fidelity Criteria
• Image Compression Models
• Error-Free Compression
• Variable-Length Coding
• LZW
3. Introduction
What is an Image?
The world around us is 3D, while the image obtained through a camera is 2D. Hence an image can be defined as "a 2D representation of the 3D world."
Types of Images
1. Analog: An analog image can be represented as a continuous range of values representing position and intensity.
2. Digital: A digital image is composed of picture elements called pixels.
4. Digital Image Processing
Digital Image
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Digital Image Processing
Digital image processing focuses on two major tasks:
• Improvement of pictorial information for human interpretation.
• Processing of image data for storage, transmission and representation for autonomous machine perception.
5. Advantages of Digital Images
Processing of digital images is faster and more cost-effective.
Digital images can be stored efficiently and transmitted from one place to another.
When shooting a digital image, one can immediately see whether the image is good or not.
Digital techniques offer plenty of scope for versatile image manipulation.
6. Disadvantages of Digital Images
Misuse of copyright has become easier, because images can be copied from the internet with just a mouse click.
Digital images cannot be enlarged beyond a certain size without compromising quality.
The memory required to store and process a good-quality digital image is very high.
7. Types of Digital Images
1. Bitmap / Monochrome / Binary Image
Each pixel consists of a single bit, either 0 or 1, where 0 is black and 1 is white.
2. Gray-Scale Image
Each pixel consists of eight bits, giving intensity values ranging from 0 to 255.
3. Colored Image
It is based on the three primary colours red, green and blue; a colored image uses 24 bits per pixel, with 8 bits for red, 8 bits for green and 8 bits for blue.
9. Image Compression
Techniques for reducing the storage required to save an image or the bandwidth required to transmit it.
Image compression addresses the problem of reducing the amount of data required to represent a digital image with no significant loss of information.
Image = Information + Redundant Data
Image compression techniques fall into two categories:
Lossless: information preserving, low compression ratios.
Lossy: not information preserving, high compression ratios.
10. Fundamentals
Data compression refers to the process of reducing the amount of data required to represent a given quantity of information.
Data and information are not the same thing: data is the means by which information is conveyed.
Data compression aims to reduce the amount of data required to represent a given quantity of information while preserving as much information as possible.
11. Fundamentals
The same amount of information can be represented by various amounts of data. Example:
Ex. 1: Your brother, Ashish, will meet you at Santacruz Airport in Mumbai at 5 minutes past 6:00 pm tomorrow night.
Ex. 2: Your brother will meet you at Santacruz Airport at 5 minutes past 6:00 pm tomorrow night.
Ex. 3: Ashish will meet you at Santacruz at 6:00 pm tomorrow night.
12. Fundamentals
Data Redundancy
Redundant data (or words) either provide no relevant information or simply restate what is already known.
The relative data redundancy RD of the first data set is defined by:

RD = 1 - 1/CR

where RD is the relative data redundancy and CR is the compression ratio.
13. Fundamentals
CR refers to the compression ratio and is defined by:

CR = n1 / n2

where n1 and n2 denote the number of information-carrying units in two data sets that represent the same information/image.
If n1 = n2, then CR = 1 and RD = 0, indicating that the first representation of the information contains no redundant data.
When n2 << n1, CR takes a large value (approaching ∞) and RD approaches 1. Larger values of CR indicate better compression.
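To make the two quantities concrete, here is a short Python sketch; the sizes n1 and n2 are assumed purely for illustration:

```python
# Hypothetical sizes: a raw 8-bit 256x256 image vs. an assumed compressed file.
n1 = 256 * 256 * 8   # bits in the original representation
n2 = 131072          # bits in the compressed representation (assumed)

CR = n1 / n2     # compression ratio
RD = 1 - 1 / CR  # relative data redundancy

print(CR)  # 4.0  -> 4:1 compression
print(RD)  # 0.75 -> 75% of the original data was redundant
```

With these numbers, 75% of the data in the first representation carries no information that the compressed representation does not also carry.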
14. Fundamentals
Three principal types of data redundancy are exploited in image compression:
(1) Coding redundancy
(2) Interpixel redundancy
(3) Psychovisual redundancy
15. Coding Redundancy
Code: a list of symbols (letters, numbers, bits, etc.).
Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels).
Code word length: the number of symbols in each code word.
16. Coding Redundancy
The gray-level histogram of an image can be used to construct codes that reduce the data used to represent it. Given the normalized histogram of a gray-level image:

pr(rk) = nk / n,  k = 0, 1, 2, …, L-1

where
rk is the discrete random variable defined in the interval [0, 1],
pr(rk) is the probability of occurrence of rk,
L is the number of gray levels,
nk is the number of times the kth gray level appears, and
n is the total number of pixels in the image.
17. Coding Redundancy
The average number of bits required to represent each pixel is given by:

Lavg = Σ l(rk) pr(rk), summed over k = 0, 1, …, L-1

where l(rk) is the number of bits used to represent each value of rk.
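A small Python sketch of this computation, using a toy pixel list and assumed code lengths l(rk):

```python
from collections import Counter

# Toy 4x4 "image" of gray levels; the code lengths l(rk) are assumed,
# e.g. as produced by some variable-length code assignment.
pixels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3]
code_length = {0: 1, 1: 2, 2: 3, 3: 3}  # l(rk), bits per symbol (assumed)

n = len(pixels)
hist = Counter(pixels)                       # nk for each gray level rk
p = {rk: nk / n for rk, nk in hist.items()}  # pr(rk) = nk / n

# Lavg = sum over k of l(rk) * pr(rk)
L_avg = sum(code_length[rk] * p[rk] for rk in p)
print(L_avg)  # 1.75 bits/pixel, vs. 2 bits/pixel for a fixed-length code
```

Because the most frequent level (0) gets the shortest code, the average falls below the 2 bits/pixel a fixed-length code would need for four levels.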
18. Interpixel Redundancy
It is also called spatial and temporal redundancy.
Because the pixels of most 2D intensity arrays are correlated spatially (i.e., each pixel is similar to or dependent on its neighbouring pixels), information is replicated unnecessarily.
This type of redundancy is related to the interpixel correlations within an image.
In a video sequence, temporally correlated pixels also duplicate information. The codes used to represent the gray levels of each image have nothing to do with the correlation between pixels.
19. Psychovisual Redundancy
Certain information has relatively less importance for the perceived quality of an image. Such information is said to be psychovisually redundant.
Unlike coding and interpixel redundancies, psychovisual redundancy is related to real/quantifiable visual information. Its elimination results in a loss of quantitative information; however, psychovisually the loss is negligible.
Removing this type of redundancy is a lossy process, and the lost information cannot be recovered.
The method used to remove this type of redundancy is called quantization, which means mapping a broad range of input values to a limited number of output values.
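As a minimal illustration of quantization, the sketch below maps 8-bit gray levels onto 8 output levels. The uniform bin layout and function name are assumptions for illustration, not from the slides:

```python
# Uniform quantization: map the 256 gray levels 0..255 onto 8 output values,
# each bin represented by its mid-point. Many inputs share one output,
# which is why the operation is irreversible.
def quantize(level, n_levels=8, max_level=255):
    step = (max_level + 1) // n_levels         # 32 input values per bin
    return (level // step) * step + step // 2  # mid-point of the bin

print(quantize(0))    # 16
print(quantize(100))  # 112
print(quantize(255))  # 240
```

Levels 96 and 100 both map to 112, so the original values cannot be recovered: this is the lossy step that must be omitted when error-free compression is desired.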
20. Fidelity Criteria
When lossy compression techniques are employed, the decompressed image will not be identical to the original image. In such cases, we can define fidelity criteria that measure the difference between these two images, i.e., how close the decompressed image is to the original.
Two general classes of criteria are used:
(1) Objective fidelity criteria
(2) Subjective fidelity criteria
21. Objective Fidelity Criteria
Used when the information loss can be expressed as a mathematical function of the input and output of a compression process, e.g., the RMS error between two images.
The error between two images is
e(x, y) = f'(x, y) - f(x, y)
so the total error between two images of size M x N is

Σ (x = 0 to M-1) Σ (y = 0 to N-1) [f'(x, y) - f(x, y)]
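The RMS error mentioned above can be computed as in this sketch; the two 2x2 images and their values are assumed for illustration:

```python
import math

# RMS error between an original image f and its decompressed version f2,
# both given as M x N lists of gray levels (toy 2x2 example).
f  = [[52, 55], [61, 59]]
f2 = [[52, 57], [60, 59]]

M, N = len(f), len(f[0])
total_sq = sum((f2[x][y] - f[x][y]) ** 2 for x in range(M) for y in range(N))
rms = math.sqrt(total_sq / (M * N))  # root of the mean squared error
print(rms)
```

A lower RMS value indicates that the decompressed image is objectively closer to the original.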
22. Subjective Fidelity Criteria
A decompressed image is presented to a cross-section of viewers, and their evaluations are averaged.
This can be done using an absolute rating scale, or by means of side-by-side comparisons of f(x, y) and f'(x, y).
A side-by-side comparison can use a scale such as {-3, -2, -1, 0, 1, 2, 3} to represent the subjective evaluations {much worse, worse, slightly worse, the same, slightly better, better, much better} respectively.
23. Image Compression Model
An image compression system is composed of two distinct functional components: an encoder and a decoder.
The encoder performs compression, while the decoder performs decompression. The encoder removes the redundancies through a series of three independent operations.
These operations can be performed in software, as in web browsers and many commercial image-editing programs, or in a combination of hardware and firmware, as in DVD players.
A codec is a device that performs both coding and decoding.
25. Mapper
The mapper transforms the input data in a way that facilitates reduction of interpixel redundancies.
It is reversible.
It may or may not reduce the amount of data needed to represent the image.
Example: run-length coding.
In video applications, the mapper uses previous frames to remove temporal redundancies.
[Block diagram: Mapper → Quantizer → Symbol Encoder → Channel; the mapper output is free of interpixel redundancies (reversible).]
26. Quantizer
The quantizer keeps irrelevant information out of the compressed representation.
This operation is irreversible.
It must be omitted when error-free compression is desired.
The visual quality of the output can vary from frame to frame as a function of image content.
[Block diagram: Mapper → Quantizer → Symbol Encoder → Channel; the quantizer output is free of psychovisual redundancies (non-reversible).]
27. Symbol Encoder
The symbol encoder generates a fixed- or variable-length code to represent the quantizer output and maps the output in accordance with the code.
The shortest code words are assigned to the most frequently occurring quantizer output values, thus minimizing coding redundancy.
It is reversible. Upon its completion, the input image has been processed for the removal of all three redundancies.
[Block diagram: Mapper → Quantizer → Symbol Encoder → Channel; the symbol encoder output is free of coding redundancies (reversible).]
28. Decoding or Decompression Process
The inverse steps are performed in reverse order.
Because quantization results in irreversible loss, an inverse quantizer block is not included in the decoder.
[Block diagram: Channel → Symbol Decoder → Inverse Mapper → decoded image.]
29. Error-Free Compression
Error-free compression is generally composed of two relatively independent operations: (1) reduce the interpixel redundancies and (2) introduce a coding method to reduce the coding redundancies.
Error-free (lossless) compression methods include:
• Variable-length coding
• LZW coding
• Bit-plane coding
• Lossless predictive coding
30. Variable-Length Coding
Coding redundancy can be minimized by using a variable-length coding method in which the shortest codes are assigned to the most probable gray levels.
The most popular variable-length coding method is Huffman coding, which involves the following two steps:
1) Create a series of source reductions by ordering the probabilities of the symbols, combining the two lowest-probability symbols into a single symbol, and replacing it in the next source reduction.
2) Code each reduced source, starting with the smallest source and working back to the original source.
32. Variable-Length Coding
2) Huffman code assignment:
The first code assignment is done for a2, the symbol with the highest probability, and the last assignments are done for a3 and a5, the symbols with the lowest probabilities.
33. Variable-Length Coding
The shortest code word (1) is given to the symbol/pixel with the highest probability (a2). The longest code word (01011) is given to the symbol/pixel with the lowest probability (a5).
The average length of the code is given by Lavg = Σ l(rk) pr(rk), as defined earlier.
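The two Huffman steps described above can be sketched with a heap in Python. The symbol probabilities below are assumed for illustration and differ from the slide's table:

```python
import heapq

# Huffman coding sketch: repeatedly merge the two lowest-probability
# symbols (the source reductions), prefixing a bit at each merge, which
# is equivalent to coding back from the smallest reduced source.
def huffman_codes(freq):
    # heap items: (probability, tiebreak, {symbol: code_so_far})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # lowest probability
        p2, _, c2 = heapq.heappop(heap)   # second lowest
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Probabilities assumed for illustration (not the slide's a1..a6 table).
freq = {"a1": 0.4, "a2": 0.3, "a3": 0.1, "a4": 0.1, "a5": 0.06, "a6": 0.04}
codes = huffman_codes(freq)
print(codes)
# The most probable symbol gets the shortest code word.
assert len(codes["a1"]) == min(len(c) for c in codes.values())
```

The resulting code is prefix-free, which is what makes a left-to-right scan of the bit stream uniquely decodable.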
34. Variable-Length Coding
The Huffman code is uniquely decodable, because any string of code symbols can be decoded by examining the individual symbols of the string from left to right.
Ex. 01010 011 1 1 00
The first valid code word is 01010 → a3, then 011 → a1, and so on.
Thus, completely decoding the message, we get a3 a1 a2 a2 a6.
Arithmetic coding, by comparison, is slower than Huffman coding but typically achieves better compression.
35. Lempel–Ziv–Welch (LZW)
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch.
The key to LZW is building a dictionary of sequences of symbols (strings) as the data is read and compressed.
Whenever a string is repeated, it is replaced with a single code word in the output.
At decompression time, the same dictionary is created and used to replace code words with the corresponding strings.
36. Lempel–Ziv–Welch (LZW)
A codebook (or dictionary) needs to be constructed. LZW compression has been integrated into several image file formats, such as GIF, TIFF and PDF.
Initially, the first 256 entries of the dictionary are assigned to the gray levels 0, 1, 2, …, 255 (i.e., assuming 8 bits/pixel).
Consider a 4x4, 8-bit image:

39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126

Initial dictionary:

Dictionary Location   Entry
0                     0
1                     1
…                     …
255                   255
256                   -
511                   -
37. Lempel–Ziv–Welch (LZW)
As the encoder examines the image pixels, gray-level sequences (i.e., blocks) that are not in the dictionary are assigned to a new entry.
- Is 39 in the dictionary? Yes.
- What about 39-39? No.
  → Add 39-39 at location 256.

Dictionary Location   Entry
0                     0
1                     1
…                     …
255                   255
256                   39-39
511                   -

39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126
38. Lempel–Ziv–Welch (LZW)
39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126

The encoder maintains a currently recognised sequence CR (initially empty). For each pixel P it forms the concatenated sequence CS = CR + P, then:
If CS is found in the dictionary D:
(1) No output
(2) CR = CS
else:
(1) Output D(CR)
(2) Add CS to D
(3) CR = P
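The encoding loop above can be sketched as follows (a simplified encoder; the function name is illustrative). Applied to the slide's 4x4 image it emits 10 code words for 16 pixels:

```python
# LZW sketch for 8-bit pixel data: the dictionary starts with the
# single-pixel entries 0..255; new pixel sequences get codes 256, 257, ...
def lzw_encode(pixels):
    dictionary = {(i,): i for i in range(256)}
    next_code = 256
    out, cr = [], ()              # cr = currently recognised sequence CR
    for p in pixels:
        cs = cr + (p,)            # concatenated sequence CS = CR + P
        if cs in dictionary:
            cr = cs               # CS found: no output, keep growing CR
        else:
            out.append(dictionary[cr])   # output D(CR)
            dictionary[cs] = next_code   # add CS to D
            next_code += 1
            cr = (p,)                    # CR = P
    if cr:
        out.append(dictionary[cr])       # flush the final sequence
    return out

# The slide's 4x4 image, row by row.
image = [39, 39, 126, 126] * 4
print(lzw_encode(image))  # [39, 39, 126, 126, 256, 258, 260, 259, 257, 126]
```

Ten 9-bit code words replace sixteen 8-bit pixels, so even this tiny image compresses slightly; the decompressor rebuilds the same dictionary from the code stream alone.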