Unit V. Image Compression
Two mark Questions
1. What is the need for image compression?
In terms of storage, the capacity of a storage device can be effectively
increased with methods that compress data on its way to the storage
device and decompress it when it is retrieved. In terms of communications, the
bandwidth of a digital communication link can be effectively increased by
compressing data at the sending end and decompressing it at the receiving
end.
At any given time, the ability of the Internet to transfer data is fixed. Thus,
if data can be compressed wherever possible, significant improvements in
data throughput can be achieved. Many files can also be combined into one
compressed document, making sending easier.
2. What is run-length coding?
Run-length encoding (RLE) is a technique used to reduce the size of a
repeating string of characters. The repeating string is called a run; typically RLE
encodes a run of symbols into two bytes, a count and a symbol. RLE can
compress any type of data regardless of its information content, but the content
of the data to be compressed affects the compression ratio achieved. Compression
is normally measured with the compression ratio.
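The two-byte (count, symbol) scheme described above can be sketched in Python (a minimal illustration with hypothetical function names, not the form used by any particular file format):

```python
def rle_encode(data):
    """Encode a sequence as a list of (count, symbol) pairs."""
    encoded = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        encoded.append((run, data[i]))
        i += run
    return encoded

def rle_decode(pairs):
    """Expand (count, symbol) pairs back to the original string."""
    return "".join(symbol * count for count, symbol in pairs)

# A run-heavy input compresses well; random data would not:
pairs = rle_encode("AAAABBBCC")   # [(4, 'A'), (3, 'B'), (2, 'C')]
```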
3. What are the different compression methods?
The different compression methods are,
i. Run Length Encoding (RLE)
ii. Arithmetic coding
iii. Huffman coding and
iv. Transform coding
4. Define compression ratio.
Compression ratio is defined as the ratio of the original size of the image to
the compressed size of the image. It is given as:
Compression Ratio = original size / compressed size : 1
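The definition above is direct arithmetic; a small worked example (illustrative figures, not from any particular codec):

```python
def compression_ratio(original_size, compressed_size):
    """Compression ratio expressed as the N in N:1."""
    return original_size / compressed_size

# A 256x256 8-bit image occupies 65,536 bytes; if it compresses
# to 8,192 bytes, the ratio is 8:1.
ratio = compression_ratio(256 * 256, 8192)  # 8.0
```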
5. What are the basic steps in JPEG?
The Major Steps in JPEG Coding involve:
i. DCT (Discrete Cosine Transformation)
ii. Quantization
iii. Zigzag Scan
iv. DPCM on DC component
v. RLE on AC components
vi. Entropy Coding
6. What is coding redundancy?
If the gray levels of an image are coded in a way that uses more code
words than necessary to represent each gray level, the resulting image is
said to contain coding redundancy.
7. What is interpixel redundancy?
The value of any given pixel can be predicted from the values of its
neighbors, so the information carried by an individual pixel is small and
its visual contribution to the image is redundant. This is also called
spatial redundancy, geometric redundancy, or interpixel redundancy.
E.g., run-length coding
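A toy illustration of exploiting interpixel redundancy: predict each pixel from its left neighbour and keep only the residuals, which are small for slowly varying image rows and therefore code compactly (a simplified sketch of predictive coding, not a complete codec):

```python
def previous_pixel_residuals(row):
    """Replace each pixel by its difference from the left neighbour;
    the first pixel is kept verbatim so the row can be reconstructed."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

# A slowly varying scanline yields small residuals:
residuals = previous_pixel_residuals([100, 101, 101, 102, 104])
# residuals == [100, 1, 0, 1, 2]
```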
8. What is psychovisual redundancy?
In normal visual processing, certain information has less importance
than other information. Such information is said to be psychovisually
redundant.
9. What is meant by fidelity criteria?
Data loss due to the coding of psychovisual redundancy may need to be
checked. Fidelity criteria are measures of such loss.
• Two kinds of fidelity criteria:
1) subjective and 2) objective
10. What is run-length coding?
Run-length encoding (RLE) is a technique used to reduce the size of a
repeating string of characters. The repeating string is called a run; typically RLE
encodes a run of symbols into two bytes, a count and a symbol. RLE can
compress any type of data regardless of its information content, but the content
of the data to be compressed affects the compression ratio achieved. Compression
is normally measured with the compression ratio.
11.Define source encoder.
The source encoder performs three operations:
1) Mapper - transforms the input data into a (usually non-visual) format.
It reduces the interpixel redundancy.
2) Quantizer - reduces the psychovisual redundancy of the input images.
This step is omitted if the system is error free.
3) Symbol encoder - reduces the coding redundancy. This is the final
stage of the encoding process.
12. Draw the JPEG decoder.
13.What are the types of decoder?
The source decoder has two components:
a) Symbol decoder - performs the inverse operation of the symbol
encoder.
b) Inverse mapper - performs the inverse operation of the mapper.
The channel decoder is omitted if the system is error free.
14. Differentiate between lossy compression and lossless compression
methods.
Lossless compression can recover the exact original data after compression.
It is used mainly for compressing database records, spreadsheets or word
processing files, where exact replication of the original is essential.
Lossy compression will result in a certain loss of accuracy in exchange for a
substantial increase in compression. Lossy compression is more effective
when used to compress graphic images and digitized voice where losses
outside visual or aural perception can be tolerated.
15. What is meant by wavelet coding?
Wavelet coding is a transform coding approach in which the image is
decomposed with a discrete wavelet transform; the resulting coefficients
are then quantized and entropy coded. It is the basis of the JPEG 2000
standard.
16.Define channel encoder.
The channel encoder reduces the impact of the channel noise by inserting
redundant bits into the source encoded data.
E.g., Hamming code
17. What is JPEG?
The acronym stands for "Joint Photographic Experts Group". It became an
international standard in 1992. It works well with colour and greyscale
images and has many applications, e.g., satellite and medical imaging.
18. Differentiate between the JPEG and JPEG 2000 standards.
JPEG:
JPEG is good for photography.
Compression ratios of 20:1 are easily attained.
24 bits per pixel can be used, leading to better accuracy.
Progressive JPEG (interlacing).
JPEG 2000:
JPEG 2000 is an all-encompassing standard.
Wavelet-based image compression standard.
Lossless and lossy compression.
Progressive transmission by pixel accuracy and resolution.
Region-of-interest coding.
Random codestream access and processing.
Robustness to bit errors.
Content-based description.
Side channel spatial information (transparency).
19.What are the operations performed by error free compression?
1) Devising an alternative representation of the image in which its
interpixel redundancies are reduced.
2) Coding the representation to eliminate coding redundancy.
20.Define Huffman coding.
Huffman coding is a popular technique for removing coding
redundancy. When coding the symbols of an information source
individually, the Huffman code yields the smallest possible number of
code symbols per source symbol.
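The construction can be sketched with a priority queue that repeatedly merges the two least probable nodes (a minimal illustration; real implementations differ in tie-breaking, so only the code lengths are fully determined):

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code table from a {symbol: frequency} dict."""
    # Each heap entry: (weight, tiebreak id, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)     # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# The most probable symbol gets the shortest code word:
codes = huffman_codes({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
```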
21. What is image compression?
Image compression refers to the process of reducing the amount of data
required to represent a given quantity of information in a digital image. The
basis of the reduction process is the removal of redundant data.
(or)
A technique used to reduce the volume of information to be
transmitted about an image
22. Define encoder.
The source encoder is responsible for removing coding, interpixel,
and psychovisual redundancy.
There are two components:
A) Source Encoder
B) Channel Encoder
23.What is variable length coding?
Variable Length Coding is the simplest approach to error free
compression. It reduces only the coding redundancy. It assigns the shortest
possible codeword to the most probable gray levels.
24.Define arithmetic coding.
In arithmetic coding, a one-to-one correspondence between source symbols
and code words does not exist; instead, a single arithmetic code word is
assigned to an entire sequence of source symbols. A code word defines an
interval of numbers between 0 and 1.
25.Draw the block diagram of transform coding system.
Twelve Mark Questions
1. Explain the various functional blocks of the JPEG standard.
JPEG: Joint Photographic Experts Group, the international standard for
photographs. It can be lossless or lossy.
Based on the facts that :
Humans are more sensitive to lower spatial frequency
components.
A large majority of useful image contents change relatively slowly
across images.
Steps involved :
Image converted to Y,Cb,Cr format
Divided into 8x8 blocks
Each 8x8 block subject to DCT followed by quantization
Zig-zag scan
DC coefficients stored using DPCM
RLE used for AC coefficients
Huffman encoding
Frame generation
Functional block diagram of the JPEG standard
Block preparation
Compute luminance (Y) & chrominance (I & Q) according
to the formulas:
Y = 0.3R + 0.59G + 0.11B (0 to 255)
I = 0.6R - 0.28G - 0.32B (0 to 255)
Q = 0.21R - 0.52G + 0.31B (0 to 255)
Separate matrices are constructed for Y, I and Q.
Square blocks of four pixels are averaged in the I & Q matrices
(lossy; compresses the image by a factor of 2).
128 is subtracted from Y, I and Q.
Each matrix is divided up into 8X8 blocks.
Discrete cosine transformation
The output of each DCT is an 8X8 matrix.
DCT element (0,0) is the average value of the block.
The other elements tell how much the block content deviates from
that average at each spatial frequency.
Theoretically lossless, but in practice rounding makes it slightly lossy.
Quantization
Less important DCT coefficients are wiped out.
It is the main lossy step involved in JPEG.
It is done by dividing each of the coefficients in the 8X8 matrix
by a weight taken from a table.
These weights are not a part of JPEG std.
Differential quantization
It replaces the (0,0) value of each block with the amount by
which it differs from the corresponding element in the
previous block.
Since these elements are the average value of their respective
blocks ,they should change slowly.
Run length encoding
It linearizes the 64 elements and applies run length encoding
to the list.
Statistical output encoding
JPEG uses Huffman encoding for this purpose.
It often produces a 20:1 compression or better.
For decoding we have to run the algorithm backward.
JPEG is roughly symmetric: Decoding takes as long as
encoding.
Advantages and Disadvantages:-
Advantages
Compression ratios of 20:1 are easily attained.
24 bits per pixel can be used, leading to better accuracy.
Progressive JPEG (interlacing).
Disadvantages
Doesn't support transparency.
Doesn't work well with sharp edges.
Almost always lossy.
No target bit rate.
Another Block Diagram
JPEG 2000 STANDARD:-
Wavelet based image compression standard
Encoding
Decompose source image into components
Decompose image and its components into rectangular tiles
Apply wavelet transform on each tile
Quantize and collect subbands of coefficients into rectangular arrays of
“code-blocks”
Encode so that certain ROI’s can be coded in a higher quality
Add markers in the bitstream to allow error resilience
Advantages:
Lossless and lossy compression.
Progressive transmission by pixel accuracy and resolution.
Region-of-Interest Coding.
Random codestream access and processing.
Robustness to bit-errors.
Content-based description.
Side channel spatial information (transparency).
2. Explain (i) one-dimensional run length coding (ii) two-dimensional run
length coding.
RLE stands for Run Length Encoding. It is a lossless algorithm that offers
decent compression ratios only for specific types of data.
• A pre-processing method, good when one symbol occurs with high
probability or when symbols are dependent
• Count how many repeated symbols occur
• The source 'symbol' is the length of a run
(i) one-dimensional run length coding
• Used for binary images
• Length of the sequences of “ones” & “zeroes” are detected.
• Assume that each row begins with a white(1) run.
• Additional compression is achieved by variable length-coding
(Huffman coding) the run-lengths.
An m-bit gray scale image can be converted into m binary images
by bit-plane slicing. These individual images are then encoded
using run-length coding.
However, a small difference in the gray level of adjacent pixels can
cause a disruption of the run of zeroes or ones.
Example: Let us say one pixel has a gray level of 127 and the next
pixel has a gray level of 128.
In binary: 127 = 01111111 & 128 = 10000000
Therefore a small change in gray level has decreased the run-
lengths in all the bit-planes.
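The run-length detection and bit-plane slicing described above can be sketched as follows (an illustrative implementation, with our own function names; the 127/128 example shows how a one-step change in gray level disturbs every bit plane):

```python
def run_lengths(bits):
    """Return the run lengths of a binary row, assuming each row begins
    with a white (1) run: a row starting with 0 is recorded as having a
    zero-length leading white run."""
    runs = []
    current, count = 1, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def bit_planes(pixels, m=8):
    """Slice a list of m-bit gray levels into m binary images
    (plane 0 holds the least significant bits)."""
    return [[(p >> k) & 1 for p in pixels] for k in range(m)]
```

For the adjacent gray levels 127 and 128, every one of the eight bit planes changes value between the two pixels, which is exactly the run-disruption effect noted above.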
(ii) two-dimensional run length coding.
Developed in 1950s and has become, along with its 2-D
extensions, the standard approach in facsimile (FAX) coding.
Two dimensional array of pixel values
Spatial redundancy and temporal redundancy
Human eye is less sensitive to chrominance signal than to
luminance signal (U and V can be coarsely coded)
Human eye is less sensitive to the higher spatial frequency
components
Human eye is less sensitive to quantizing distortion at high
luminance levels
Source image as 2-D matrix of pixel values
R, G, B format requires three matrices, one each for R, G, B
quantized values
In Y, U, V representation, the U and V matrices can be half as
small as the Y matrix
Source image matrix is divided into blocks of 8X8 submatrices
Smaller block size helps DCT computation and individual blocks
are sequentially fed to the DCT which transforms each block
separately
Advantages and disadvantages
This algorithm is very easy to implement and does not require much CPU
horsepower. RLE compression is only efficient with files that contain lots of
repetitive data. These can be text files if they contain lots of spaces for indenting
but line-art images that contain large white or black areas are far more suitable.
Computer generated colour images (e.g. architectural drawings) can also give
fair compression ratios.
3. Explain variable length coding and Huffman coding.
Variable length coding:
Assigning fewer bits to the more probable gray levels than to the less
probable ones achieves data compression. This is called variable length
coding.
A variable length code assigns each character a code word whose length
is inversely proportional to that character's frequency.
The code must satisfy the prefix property to be uniquely decodable.
It is a two-pass algorithm:
The first pass accumulates the character frequencies and generates
the codebook.
The second pass does the compression using the codebook.
Huffman codes require an enormous number of computations. For N
source symbols, N-2 source reductions (sorting operations) and N-2
code assignments must be made. Sometimes we sacrifice coding
efficiency to reduce the number of computations.
Create codes by constructing a binary tree
1. Consider all characters as free nodes
2. Assign two free nodes with lowest frequency to a parent node
with weights equal to sum of their frequencies
3. Remove the two free nodes and add the newly created parent
node to the list of free nodes
4. Repeat steps 2 and 3 until there is one free node left. It becomes
the root of the tree.
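The four tree-construction steps above can be sketched with a priority queue. This is a minimal illustration; the heapq-based tie-breaking is one of several valid choices, so individual code words may differ between runs of the reductions while the average code length stays optimal:

```python
import heapq

def huffman_codes(freq):
    """Build Huffman code words by repeatedly merging the two free
    nodes of lowest frequency, as in steps 1-4 above.
    freq: dict mapping symbol -> probability (or count)."""
    # Each heap entry: (weight, tie_breaker, {symbol: code_so_far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Prefix 0 to one subtree's codes and 1 to the other's,
        # then push the merged parent node back as a free node.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        n += 1
        heapq.heappush(heap, (w1 + w2, n, merged))
    return heap[0][2]
```

Because the codes come from a binary tree, no code word is a prefix of another, so the output is uniquely decodable.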
Huffman Coding
This coding reduces average number of bits/pixel.
It assigns variable length bits to different symbols.
Achieves compression in 2 steps.
Source reduction
Code assignment
Steps
1. Find the gray level probabilities from the image histogram.
2. Arrange the probabilities in decreasing order, highest at the top.
3. Combine the smallest two by addition, always keeping the sums in
decreasing order.
4. Repeat step 3 until only two probabilities are left.
5. By working backward along the tree, generate code by alternating
assignment of 0 & 1.
Fig: Huffman Source Reductions
Fig : Huffman code assignment procedure
4. Explain arithmetic coding and LZW coding.
Arithmetic coding
Arithmetic compression is an alternative to Huffman compression, it
enables characters to be represented as fractional bit lengths. Unlike
for Huffman compression, where fractional code lengths are not
possible and the allocation of shorter codewords for more frequently
occurring characters needs at least one-bit codeword no matter how
high its frequency.
Arithmetic coding works by representing a number by an interval of
real numbers greater or equal to zero, but less than one. As a message
becomes longer, the interval needed to represent it becomes smaller
and smaller, and the number of bits needed to specify it increases.
Entire sequence of source symbol (message) is assigned a single
arithmetic code word.
There is no one to one coding like Huffman
The code word lies within the interval [0, 1)
As the number of symbols in the message increases, the interval used to
represent it becomes smaller and the number of information units (bits)
required to represent the interval becomes larger
Ex. More bits are required to represent 0.003 than 0.1
Steps: Arithmetic Coding
The basic algorithm for encoding a file using arithmetic coding works
conceptually as follows:
(1) Begin with current range [L,H) initialized to [0,1).
Note : We denote brackets [0,1) in such a way to show that it is equal to
or greater than 0 but less than 1.
(2) For each symbol of the file, we perform two steps :
a) Subdivide the current interval into subintervals, one for each
alphabet symbol. b) Select the subinterval corresponding to the
symbol that actually occurs next in the file and make it the new
current interval.
(3) Output enough bits to distinguish the current interval from all
other possible intervals.
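The interval-subdivision steps above can be sketched as follows. This is a conceptual illustration using floating point, which loses precision for long messages; the subinterval layout follows the order of the probability table passed in:

```python
def arithmetic_interval(message, probs):
    """Follow the steps above: start from [0, 1) and narrow the current
    interval once per symbol. Returns the final (low, high).
    probs: dict symbol -> probability; subintervals are laid out in the
    dict's iteration order."""
    low, high = 0.0, 1.0
    for sym in message:
        width = high - low
        cum = 0.0
        for s, p in probs.items():
            if s == sym:
                # Select the subinterval of the symbol that occurs next.
                high = low + (cum + p) * width
                low = low + cum * width
                break
            cum += p
    return low, high
```

With p(a1) = p(a2) = p(a4) = 0.2 and p(a3) = 0.4, encoding the sequence a1 a2 a3 a3 a4 narrows the interval to [0.06752, 0.0688), matching the worked example that follows; any number in that interval, e.g. 0.068, represents the whole message.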
Example: Encode the message: a1 a2 a3 a3 a4
Table : Arithmetic Coding example
Fig : Arithmetic coding procedure
So, any number in the interval [0.06752, 0.0688) , for example 0.068 can be
used to represent the message.
Here 3 decimal digits are used to represent the 5-symbol source message.
This translates into 3/5 or 0.6 decimal digits per source symbol and
compares favorably with the entropy of the source,
-(3 x 0.2 log10 0.2 + 0.4 log10 0.4) = 0.5786 digits per symbol
As the length of the sequence increases, the resulting arithmetic code
approaches the bound set by entropy.
In practice, the length fails to reach the lower bound, because:
• The addition of the end of message indicator that is needed to separate one
message from another
• The use of finite precision arithmetic
LZW (Lempel-Ziv-Welch) coding
LZW (Lempel-Ziv-Welch) coding assigns fixed-length code words
to variable-length sequences of source symbols, but requires no a
priori knowledge of the probabilities of the source symbols. LZW was
formulated in 1984.
The nth extension of a source can be coded with fewer average bits
per symbol than the original source.
LZW is used in:
• Tagged Image File Format (TIFF)
• Graphics Interchange Format (GIF)
• Portable Document Format (PDF)
The Algorithm:
• A codebook or “dictionary” containing the source symbols is
constructed.
• For 8-bit monochrome images, the first 256 words of the dictionary are
assigned to the gray levels 0-255
• Remaining part of the dictionary is filled with sequences of the gray
levels
Example:
39 39 126 126
39 39 126 126
39 39 126 126
39 39 126 126
Table : LZW Coding example
Compression ratio = (8 x 16) / (10 x 9 ) = 64 / 45 = 1.4
Important features of LZW:
• The dictionary is created while the data are being encoded. So encoding
can be done “on the fly”
• The dictionary need not be transmitted. Dictionary can be built up at
receiving end “on the fly”
• If the dictionary “overflows” then we have to reinitialize the dictionary
and add a bit to each one of the code words.
• Choosing a large dictionary size avoids overflow, but spoils
compressions
Decoding LZW:
Let the bit stream received be:
39 39 126 126 256 258 260 259 257 126
In LZW, the dictionary which was used for encoding need not be sent with
the image. A separate dictionary is built by the decoder, on the “fly”, as it
reads the received code words.
Recognized | Encoded value | Pixels    | Dic. address | Dic. entry
           | 39            | 39        |              |
39         | 39            | 39        | 256          | 39-39
39         | 126           | 126       | 257          | 39-126
126        | 126           | 126       | 258          | 126-126
126        | 256           | 39-39     | 259          | 126-39
256        | 258           | 126-126   | 260          | 39-39-126
258        | 260           | 39-39-126 | 261          | 126-126-39
260        | 259           | 126-39    | 262          | 39-39-126-126
259        | 257           | 39-126    | 263          | 126-39-39
257        | 126           | 126       | 264          | 39-126-126
5. Explain wavelet based image compression.
In contrast to image compression using the discrete cosine transform
(DCT), which has poor frequency localization due to its inadequate
basis window, the discrete wavelet transform (DWT) resolves the
problem by trading off spatial (time) resolution for frequency
resolution.
Redundancy is removed by exploiting the structure between coefficients.
Wavelet Coding
Fig : Wavelet coding system ( encoder)
Fig : Wavelet coding system ( decoder)
Advantages:
Lossless and lossy compression.
Progressive transmission by pixel accuracy and resolution.
Region-of-Interest Coding.
Random code stream access and processing.
Robustness to bit-errors.
Content-based description.
Side channel spatial information (transparency).
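A one-level 1-D Haar transform illustrates the averaging/differencing idea behind wavelet coding. This is a minimal sketch only: JPEG 2000 actually uses 2-D transforms with the 5/3 and 9/7 filter banks, not the plain Haar filter shown here:

```python
import math

def haar_1d(signal):
    """One level of a 1-D Haar wavelet transform: the first half holds
    scaled pairwise averages (approximation), the second half scaled
    pairwise differences (detail). Orthonormal scaling by 1/sqrt(2)."""
    s = 1.0 / math.sqrt(2.0)
    half = len(signal) // 2
    approx = [(signal[2*i] + signal[2*i + 1]) * s for i in range(half)]
    detail = [(signal[2*i] - signal[2*i + 1]) * s for i in range(half)]
    return approx + detail

def haar_1d_inverse(coeffs):
    """Perfect reconstruction from one Haar level: the transform itself
    is lossless, so lossy compression comes only from quantizing the
    detail coefficients."""
    s = 1.0 / math.sqrt(2.0)
    n = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:n], coeffs[n:]):
        out.extend([(a + d) * s, (a - d) * s])
    return out
```

Where the signal is locally smooth the detail coefficients are near zero, which is why wavelet coefficients quantize and compress so well.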
6. Explain arithmetic coding and Huffman coding.
Arithmetic coding
Arithmetic compression is an alternative to Huffman compression; it
enables characters to be represented with fractional bit lengths. In
Huffman compression, fractional code lengths are not possible: even
the most frequently occurring character needs a codeword of at least
one bit, no matter how high its frequency.
Arithmetic coding works by representing a number by an interval of
real numbers greater or equal to zero, but less than one. As a message
becomes longer, the interval needed to represent it becomes smaller
and smaller, and the number of bits needed to specify it increases.
Entire sequence of source symbol (message) is assigned a single
arithmetic code word.
There is no one to one coding like Huffman
The code word lies within the interval [0, 1)
As the number of symbols in the message increases, the interval used to
represent it becomes smaller and the number of information units (bits)
required to represent the interval becomes larger
Ex. More bits are required to represent 0.003 than 0.1
Steps: Arithmetic Coding
The basic algorithm for encoding a file using arithmetic coding works
conceptually as follows:
(1) Begin with current range [L,H) initialized to [0,1).
Note : We denote brackets [0,1) in such a way to show that it is equal to
or greater than 0 but less than 1.
(2) For each symbol of the file, we perform two steps :
a) Subdivide the current interval into subintervals, one for each
alphabet symbol. b) Select the subinterval corresponding to the
symbol that actually occurs next in the file and make it the new
current interval.
(3) Output enough bits to distinguish the current interval from all
other possible intervals.
Example:
Encode the message: a1 a2 a3 a3 a4
Fig : Arithmetic Coding example
Fig : Arithmetic coding procedure
Huffman Coding
This coding reduces average number of bits/pixel.
It assigns variable length bits to different symbols.
Achieves compression in 2 steps.
Source reduction
Code assignment
Steps
1. Find the gray level probabilities from the image histogram.
2. Arrange the probabilities in decreasing order, highest at the top.
3. Combine the smallest two by addition, always keeping the sums in
decreasing order.
4. Repeat step 3 until only two probabilities are left.
5. By working backward along the tree, generate code by alternating
assignment of 0 & 1.
Fig : Huffman Source Reductions
Fig : Huffman code assignment procedure
7. Explain how compression is achieved in transform coding and explain
the DCT.
Transform Coding
Three steps:
Divide the data sequence into blocks of size N and transform each
block using a reversible mapping
Quantize the transformed sequence
Encode the quantized values
Benefits
- the transform coefficients are relatively uncorrelated
- the energy is highly compacted
- it is reasonably robust to channel errors
• The DCT is similar to the DFT, but can provide a better approximation
with fewer coefficients.
• The coefficients of the DCT are real valued, instead of complex valued
as in the DFT.
The discrete cosine transform (DCT) is the basis for many image
compression algorithms. One clear advantage of the DCT over the
DFT is that there is no need to manipulate complex numbers.
The equation for the forward DCT is

C(u, v) = α(u) α(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

and for the inverse DCT

f(x, y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} α(u) α(v) C(u, v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

where

α(u) = sqrt(1/N)  for u = 0
α(u) = sqrt(2/N)  for u = 1, 2, ..., N-1

DCT in Terms of Basis Functions
The basis functions (basis images) for the DCT are given by:

g(x, y, u, v) = α(u) α(v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

where α(u) and α(v) are as defined above.
N is the block size of the image (normally N = 8)
Matrix of Discrete Cosine Transform (DCT)
Zig-zag Scan DCT Blocks
• To group low frequency coefficients in top of vector.
• Maps 8 x 8 to a 1 x 64 vector.
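The forward DCT formula and the zig-zag scan can be sketched directly. This is an O(N^4) educational version evaluated straight from the definition; production codecs use fast factorizations of the transform:

```python
import math

def alpha(u, N):
    """Normalization factor: sqrt(1/N) for u = 0, else sqrt(2/N)."""
    return math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """Direct 2-D DCT of an NxN block from the forward formula above."""
    N = len(block)
    return [[alpha(u, N) * alpha(v, N) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for v in range(N)] for u in range(N)]

def zigzag(block):
    """Map an NxN block to a 1 x N^2 vector along anti-diagonals so the
    low-frequency coefficients come first."""
    N = len(block)
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[u][v] for u, v in order]
```

For a constant block, all the energy lands in the (0,0) coefficient (the block average times N) and every other coefficient is zero, which is the energy-compaction property transform coding relies on.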
8. Explain any two basic data redundancies in digital image compression.
Data Redundancy
Various amount of data may be used to represent the same information.
Data which either do not provide necessary information or provide the
same information again are called redundant data.
Removing redundant data from the image reduces the size.
Redundancies In Image
In image compression 3 basic data redundancies can be identified.
1. Coding redundancy (CR)
2. Interpixel redundancy (IR)
3. Psychovisual redundancy (PVR)
Data compression is achieved when one or more of these redundancies
are reduced or eliminated
Coding redundancy
A natural m-bit coding method assigns m bits to each gray level
without considering the probability with which that gray level occurs,
and is therefore very likely to contain coding redundancy.
Basic concept:
Utilize the probability of occurrence of each gray level (the histogram)
to determine the length of the code representing that particular gray
level: variable-length coding.
Assign shorter code words to the gray levels that occur most frequently
and longer code words to those that occur least frequently.
Fig : Graphical representation of fundamental basis of data compression
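The variable-length coding idea can be made concrete with a small average-length calculation. The eight-level histogram and code lengths below are illustrative assumptions, not data from this document:

```python
def avg_code_length(probs, lengths):
    """L_avg = sum over gray levels of p(r_k) * l(r_k)."""
    return sum(p * l for p, l in zip(probs, lengths))

# Hypothetical 8-level histogram and a variable-length code that
# gives the shortest code words to the most probable levels.
probs   = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
lengths = [2, 2, 2, 3, 4, 5, 6, 6]

fixed = avg_code_length(probs, [3] * 8)   # natural 3-bit code
var   = avg_code_length(probs, lengths)   # variable-length code
compression_ratio = fixed / var
```

Here the natural code needs 3 bits/pixel while the variable-length code averages 2.7 bits/pixel, a compression ratio of about 1.11: the coding redundancy of the fixed-length code has been removed.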
Interpixel Redundancy
Caused by high interpixel correlations within an image; i.e., the gray
level of any given pixel can be reasonably predicted from the values of
its neighbors (the information carried by individual pixels is
relatively small). Also called spatial redundancy, geometric redundancy
or interframe redundancy.
Interpixel redundancy occurs because adjacent pixels tend to be highly
correlated.
Adjacent pixel values tend to be close to each other.
The value of a given pixel can be predicated from the value of
its neighbors.
Visual contribution of a single pixel to an image is redundant.
To reduce inter pixel redundancy image is transformed in to
more efficient format.
For Ex. Difference between adjacent pixels can be used to store
an image.
This transformation process is called mapping
Reverse of that is called inverse mapping
We can detect the presence of correlation between pixels (or
interpixel redundancy) by computing the auto-correlation coefficients
along a row of pixels
γ(Δn) = A(Δn) / A(0)

where

A(Δn) = (1 / (N - Δn)) Σ_{y=0}^{N-1-Δn} f(x, y) f(x, y + Δn)
Maximum possible value of γ(∆n) is 1 and this value is approached for this
image, both for adjacent pixels and also for pixels which are separated by 45
pixels (or multiples of 45).
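The autocorrelation coefficients above can be computed along one image row as follows (a direct sketch of the definition; row holds the values f(x, y) for y = 0 .. N-1 at a fixed x):

```python
def autocorrelation(row, dn):
    """Normalized autocorrelation gamma(dn) = A(dn) / A(0) along a row,
    with A(dn) = (1 / (N - dn)) * sum_{y=0}^{N-1-dn} f(y) f(y + dn)."""
    N = len(row)
    def A(d):
        return sum(row[y] * row[y + d] for y in range(N - d)) / (N - d)
    return A(dn) / A(0)
```

A constant row gives gamma = 1 at every separation, while a strictly alternating row is uncorrelated at odd separations and perfectly correlated at even ones, illustrating how gamma reflects interpixel structure.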
Psychovisual Redundancy
Psychovisual redundancy refers to the fact that some information is
more important to the human visual system than other types of
information.
Using fewer gray levels reduces the size of the image.
Elimination of psychovisually redundant data from an image results in
a loss of quantitative information.
This process is not reversible
The key in image compression algorithm development is to determine
the minimal data required to retain the necessary information.
This is achieved by taking advantage of the redundancy that exists in the
image.
Any redundant information that is not required can be
eliminated to reduce the amount of data used to represent the
image
The eye does not respond with equal sensitivity to all visual information.
Certain information has less relative importance than other information in
normal visual processing; such information is psychovisually redundant and
can be eliminated without significantly impairing the quality of image
perception.
The elimination of psychovisually redundant data results in a loss of
quantitative information, so this is a lossy data compression method.
Image compression methods based on the elimination of psychovisually
redundant data (usually called quantization) are usually applied to commercial
broadcast TV and similar applications for human visualization.
9. Explain the Huffman coding algorithm, giving a numerical example.
Huffman Coding
This coding reduces average number of bits/pixel.
It assigns variable length bits to different symbols.
Achieves compression in 2 steps.
Source reduction
Code assignment
Steps
1. Find the gray level probabilities from the image histogram.
2. Arrange the probabilities in decreasing order, highest at the top.
3. Combine the smallest two by addition, always keeping the sums in
decreasing order.
4. Repeat step 3 until only two probabilities are left.
5. By working backward along the tree, generate code by alternating
assignment of 0 & 1.
Fig : Huffman Source Reductions
Fig : Huffman code assignment procedure
Calculating Lavg & Entropy
Lavg= 2.2 bits/pixel
Entropy = 2.14 bits/pixel
Efficiency of Huffman code = 2.14/2.2 = 0.973
Constraint: symbols must be coded one at a time.
The resulting code is instantaneous and uniquely decodable.
10. Explain constrained least squares filtering.
Constrained Least Squares Filtering
Only the mean and variance of the noise are required.
The degradation model in vector-matrix form:

g = H f + η

where g and η are MN x 1 vectors and H is an MN x MN matrix.

The objective is to minimize

C = Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [∇² f(x, y)]²

subject to the constraint

||g - H f̂||² = ||η||²

The frequency-domain solution is

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ |P(u, v)|² ) ] G(u, v)

where P(u, v) is the Fourier transform of the Laplacian operator

            0  -1   0
p(x, y) =  -1   4  -1
            0  -1   0
In that case we seek a solution that minimizes the function

M(f) = ||y - H f||²

A necessary condition for M(f) to have a minimum is that its gradient
with respect to f is equal to zero. This gradient is given by

∂M(f)/∂f = -2 Hᵀ (y - H f)

and by using the steepest-descent type of optimization we can formulate
an iterative rule as follows:

f₀ = β Hᵀ y
f_{k+1} = f_k + β Hᵀ (y - H f_k) = β Hᵀ y + (I - β Hᵀ H) f_k
Constrained least squares iteration
In this method we attempt to solve the problem of constrained
restoration iteratively. As already mentioned the following functional is
minimized
M(f, α) = ||y - H f||² + α ||C f||²

The necessary condition for a minimum is that the gradient of M(f, α) is
equal to zero. That gradient is

∂M(f, α)/∂f = 2 [ (Hᵀ H + α Cᵀ C) f - Hᵀ y ]

The initial estimate and the updating rule for obtaining the restored
image are now given by

f₀ = β Hᵀ y
f_{k+1} = f_k + β [ Hᵀ y - (Hᵀ H + α Cᵀ C) f_k ]

It can be proved that the above iteration (known as iterative CLS or the
Tikhonov-Miller method) converges if

0 < β < 2 / λ_max

where λ_max is the maximum eigenvalue of the matrix

(Hᵀ H + α Cᵀ C)
If the matrices H and C are block-circulant the iteration can be
implemented in the frequency domain.
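The iterative CLS update above can be sketched for a small dense system. This is an educational version using plain Python lists; as the text notes, block-circulant H and C permit a much faster frequency-domain implementation for real images:

```python
def mat_vec(M, v):
    """Dense matrix-vector product."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def iterative_cls(H, C, y, alpha_reg, beta, iters=500):
    """Iterative constrained least squares (Tikhonov-Miller) update:
        f_0     = beta * H^T y
        f_{k+1} = f_k + beta * (H^T y - (H^T H + alpha C^T C) f_k)
    Converges when 0 < beta < 2 / lambda_max of (H^T H + alpha C^T C)."""
    Ht, Ct = transpose(H), transpose(C)
    n = len(y)
    Hty = mat_vec(Ht, y)
    # A = H^T H + alpha * C^T C
    A = [[sum(Ht[i][k] * H[k][j] for k in range(n))
          + alpha_reg * sum(Ct[i][k] * C[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    f = [beta * v for v in Hty]          # initial estimate f_0
    for _ in range(iters):
        Af = mat_vec(A, f)
        f = [f[i] + beta * (Hty[i] - Af[i]) for i in range(n)]
    return f
```

With alpha_reg = 0 and an invertible H, the iteration converges to the unconstrained least squares solution; a positive alpha_reg pulls the estimate toward solutions with small ||C f||, which is the regularizing role of the constraint.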