This document discusses image compression using the discrete cosine transform (DCT). It begins by explaining the 1D DCT and how it converts a signal into elementary frequency components. It then shows how to compute the 1D and 2D DCT using Mathematica functions. The 2D DCT is computed by applying the 1D DCT to rows and then columns of an image. Examples compress a test image and recover it to demonstrate the process works as expected.
DCT
Mathematica Journal, 4(1), 1994, p. 81-88
Image Compression Using the Discrete Cosine Transform
Andrew B. Watson
NASA Ames Research Center
Abstract
The discrete cosine transform (DCT) is a technique for converting a signal into elementary
frequency components. It is widely used in image compression. Here we develop some simple
functions to compute the DCT and to compress images.
These functions illustrate the power of Mathematica in the prototyping of image processing
algorithms.
The rapid growth of digital imaging applications, including desktop publishing, multimedia,
teleconferencing, and high-definition television (HDTV), has increased the need for effective and
standardized image compression techniques. Among the emerging standards are JPEG, for
compression of still images [Wallace 1991]; MPEG, for compression of motion video [Puri
1992]; and CCITT H.261 (also known as Px64), for compression of video telephony and
teleconferencing.
All three of these standards employ a basic technique known as the discrete cosine transform
(DCT). Developed by Ahmed, Natarajan, and Rao [1974], the DCT is a close relative of the
discrete Fourier transform (DFT). Its application to image compression was pioneered by Chen
and Pratt [1984]. In this article, I will develop some simple functions to compute the DCT and
show how it is used for image compression. We have used these functions in our laboratory to
explore methods of optimizing image compression for the human viewer, using information
about the human visual system [Watson 1993]. The goal of this paper is to illustrate the use of
Mathematica in image processing and to provide the reader with the basic tools for further
exploration of this subject.
The One-Dimensional Discrete Cosine Transform
The discrete cosine transform of a list of n real numbers s(x), x = 0, ..., n-1, is the list of length n
given by:
$$
S(u) = \sqrt{\frac{2}{n}}\, C(u) \sum_{x=0}^{n-1} s(x) \cos\frac{(2x+1)u\pi}{2n}, \qquad u = 0, \ldots, n-1,
$$

where $C(u) = 2^{-1/2}$ for $u = 0$ and $C(u) = 1$ otherwise.
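As a cross-check of the definition above, the forward transform can be sketched in a few lines of Python (the `dct1d` helper is illustrative, not part of the original article; only the standard library is used):

```python
import math

def dct1d(s):
    """Forward DCT: S(u) = sqrt(2/n) C(u) sum_x s(x) cos((2x+1) u pi / (2n))."""
    n = len(s)
    def C(u):
        return 2 ** -0.5 if u == 0 else 1.0
    return [
        math.sqrt(2.0 / n) * C(u)
        * sum(s[x] * math.cos((2 * x + 1) * u * math.pi / (2 * n)) for x in range(n))
        for u in range(n)
    ]

# A constant signal puts all its energy into the u = 0 (DC) coefficient.
S = dct1d([1.0] * 8)
```

For a constant input of length 8, every coefficient except $S(0)$ vanishes, which is exactly the orthogonality of the higher-frequency basis vectors against the DC vector.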
Each element of the transformed list S(u) is the inner (dot) product of the input list s(x) and a
basis vector. The constant factors are chosen so that the basis vectors are orthogonal and
normalized. The eight basis vectors for n = 8 are shown in Figure 1. The DCT can be written as
the product of a vector (the input list) and the n x n orthogonal matrix whose rows are the basis
vectors. This matrix, for n = 8, can be computed as follows:
DCTMatrix =
Table[ If[ k==0,
Sqrt[1/8],
Sqrt[2/8] Cos[Pi (2j + 1) k/16] ],
{k, 0, 7}, {j, 0, 7}] // N;
We can check that the matrix is orthogonal:
DCTMatrix . Transpose[DCTMatrix] // Chop // MatrixForm
1. 0 0 0 0 0 0 0
0 1. 0 0 0 0 0 0
0 0 1. 0 0 0 0 0
0 0 0 1. 0 0 0 0
0 0 0 0 1. 0 0 0
0 0 0 0 0 1. 0 0
0 0 0 0 0 0 1. 0
0 0 0 0 0 0 0 1.
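The same construction and orthogonality check can be reproduced outside Mathematica. Here is a minimal Python sketch (the names dct_matrix and product are ours, not from the article's code):

```python
import math

n = 8

# Row k of the DCT matrix: sqrt(1/8) for k = 0,
# else sqrt(2/8) cos((2j + 1) k pi / 16).
dct_matrix = [
    [math.sqrt(1 / n) if k == 0
     else math.sqrt(2 / n) * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
     for j in range(n)]
    for k in range(n)
]

# M . M^T should be the identity matrix, up to rounding error.
product = [[sum(dct_matrix[a][x] * dct_matrix[b][x] for x in range(n))
            for b in range(n)] for a in range(n)]
for a in range(n):
    for b in range(n):
        assert abs(product[a][b] - (1.0 if a == b else 0.0)) < 1e-12
```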
Each basis vector corresponds to a sinusoid of a certain frequency:
Show[GraphicsArray[Partition[
ListPlot[#, PlotRange -> {-.5, .5}, PlotJoined -> True,
DisplayFunction -> Identity]&
/@ DCTMatrix, 2] ]];
Figure 1. The eight basis vectors for the discrete cosine transform of length eight.
The list s(x) can be recovered from its transform S(u) by applying the inverse cosine transform
(IDCT):
s(x) = √(2/n) Σ_{u=0}^{n-1} C(u) S(u) cos[(2x + 1) u π / 2n],   x = 0, ..., n-1

where C(u) = 1/√2 for u = 0
           = 1    otherwise
This equation expresses s as a linear combination of the basis vectors. The coefficients are the
elements of the transform S, which may be regarded as reflecting the amount of each frequency
present in the input s.
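Because the basis is orthonormal, inversion amounts to multiplying by the transpose of the DCT matrix. A small Python sketch of the round trip (the helper names dct and idct are ours):

```python
import math
import random

n = 8
C = [[math.sqrt(1 / n) if k == 0
      else math.sqrt(2 / n) * math.cos(math.pi * (2 * j + 1) * k / (2 * n))
      for j in range(n)] for k in range(n)]

def dct(s):
    # S = C . s
    return [sum(C[u][x] * s[x] for x in range(n)) for u in range(n)]

def idct(S):
    # s = C^T . S, valid because C is orthogonal
    return [sum(C[u][x] * S[u] for u in range(n)) for x in range(n)]

signal = [random.uniform(-1, 1) for _ in range(n)]
restored = idct(dct(signal))
assert max(abs(a - b) for a, b in zip(signal, restored)) < 1e-10
```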
We generate a list of random numbers to serve as a test input:
input1 = Table[Random[Real, {-1, 1}], {8}]
{0.203056, 0.980407, 0.35312, -0.106651, 0.0399382, 0.871475,
-0.648355, 0.501067}
The DCT is computed by matrix multiplication:
output1 = DCTMatrix . input1
{0.775716, 0.3727, 0.185299, 0.0121461, -0.325, -0.993021,
 0.559794, -0.625127}
As noted above, the DCT is closely related to the discrete Fourier transform (DFT). In fact, it is
possible to compute the DCT via the DFT (see [Jain 1989, p. 152]): First create a new list by
extracting the even elements, followed by the reversed odd elements. Then multiply the DFT of
this re-ordered list by so-called "twiddle factors" and take the real part. We can carry out this
process for n = 8 using Mathematica's DFT function.
DCTTwiddleFactors = N @ Join[{1},
Table[Sqrt[2] Exp[-I Pi k /16], {k, 7}]]
{1., 1.38704 - 0.275899 I, 1.30656 - 0.541196 I, 1.17588 - 0.785695 I,
 1. - 1. I, 0.785695 - 1.17588 I, 0.541196 - 1.30656 I, 0.275899 - 1.38704 I}
The function to compute the DCT of a list of length n = 8 is then:
DCT[list_] := Re[ DCTTwiddleFactors *
InverseFourier[N[list[[{1, 3, 5, 7, 8, 6, 4, 2}]]]]]
Note that we use the function InverseFourier to implement what is usually in engineering
called the forward DFT. Likewise, we use Fourier to implement what is usually called the
inverse DFT. The function N is used to convert integers to reals because (in Version 2.2)
Fourier and InverseFourier are not evaluated numerically when their arguments are all
integers. The special case of a list of zeros needs to be handled separately by overloading the
functions, since N of the integer 0 is an integer and not a real.
Unprotect[Fourier, InverseFourier];
Fourier[x:{0 ..}]:= x;
InverseFourier[x:{0 ..}]:= x;
Protect[Fourier, InverseFourier];
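The same DFT route can be sketched in pure Python. The helper names below are ours, and the DFT here uses the 1/√n normalization that Mathematica's InverseFourier applies:

```python
import cmath
import math

n = 8

def dft_normalized(v):
    # Engineering forward DFT with 1/sqrt(n) normalization.
    return [sum(v[x] * cmath.exp(-2j * math.pi * x * k / n)
                for x in range(n)) / math.sqrt(n)
            for k in range(n)]

# Twiddle factors: 1, then sqrt(2) exp(-I pi k / 2n) for k = 1, ..., 7.
twiddle = [complex(1)] + [math.sqrt(2) * cmath.exp(-1j * math.pi * k / (2 * n))
                          for k in range(1, n)]

def dct_via_dft(s):
    # Even elements followed by the reversed odd elements.
    v = [s[0], s[2], s[4], s[6], s[7], s[5], s[3], s[1]]
    return [(t * f).real for t, f in zip(twiddle, dft_normalized(v))]

def dct_direct(s):
    c = lambda u: 1 / math.sqrt(2) if u == 0 else 1.0
    return [math.sqrt(2 / n) * c(u) *
            sum(s[x] * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                for x in range(n))
            for u in range(n)]

sample = [0.2, 0.98, 0.35, -0.1, 0.04, 0.87, -0.65, 0.5]
assert max(abs(a - b)
           for a, b in zip(dct_via_dft(sample), dct_direct(sample))) < 1e-12
```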
We apply DCT to our test input and compare it to the earlier result computed by matrix
multiplication. To compare the results, we subtract them and apply the Chop function to suppress
values very close to zero:
DCT[input1]
{0.775716, 0.3727, 0.185299, 0.0121461, -0.325, -0.993021,
 0.559794, -0.625127}
% - output1 // Chop
{0, 0, 0, 0, 0, 0, 0, 0}
The inverse DCT can be computed by multiplication with the inverse of the DCT matrix. We
illustrate this with our previous example:
Inverse[DCTMatrix] . output1
{0.203056, 0.980407, 0.35312, -0.106651, 0.0399382, 0.871475,
-0.648355, 0.501067}
% - input1 // Chop
{0, 0, 0, 0, 0, 0, 0, 0}
As you might expect, the IDCT can also be computed via the inverse DFT. The "twiddle factors"
are the complex conjugates of the DCT factors and the reordering is applied at the end rather than
the beginning:
IDCTTwiddleFactors = Conjugate[DCTTwiddleFactors]
{1., 1.38704 + 0.275899 I, 1.30656 + 0.541196 I, 1.17588 + 0.785695 I,
 1. + 1. I, 0.785695 + 1.17588 I, 0.541196 + 1.30656 I, 0.275899 + 1.38704 I}
IDCT[list_] := Re[Fourier[
IDCTTwiddleFactors list] ][[{1, 8, 2, 7, 3, 6, 4, 5}]]
For example:
IDCT[DCT[input1]] - input1 // Chop
{0, 0, 0, 0, 0, 0, 0, 0}
The Two-Dimensional DCT
The one-dimensional DCT is useful in processing one-dimensional signals such as speech
waveforms. For analysis of two-dimensional (2D) signals such as images, we need a 2D version
of the DCT. For an n x m matrix s, the 2D DCT is computed in a simple way: The 1D DCT is
applied to each row of s and then to each column of the result. Thus, the transform of s is given
by
S(u,v) = (2/√(nm)) C(u) C(v) Σ_{y=0}^{m-1} Σ_{x=0}^{n-1} s(x,y) cos[(2x + 1) u π / 2n] cos[(2y + 1) v π / 2m],
u = 0, ..., n-1,   v = 0, ..., m-1

where C(u) = 1/√2 for u = 0
           = 1    otherwise   (and similarly for C(v))
Since the 2D DCT can be computed by applying 1D transforms separately to the rows and
columns, we say that the 2D DCT is separable in the two dimensions.
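Separability is easy to verify numerically. The following Python sketch (all names are ours) computes the 2D DCT both ways, by the double sum in the definition and by 1D transforms of rows then columns, and checks that the results agree:

```python
import math
import random

n = m = 8

def c(u):
    return 1 / math.sqrt(2) if u == 0 else 1.0

def dct1(s):
    k = len(s)
    return [math.sqrt(2 / k) * c(u) *
            sum(s[x] * math.cos((2 * x + 1) * u * math.pi / (2 * k))
                for x in range(k))
            for u in range(k)]

def dct2_separable(img):
    rows = [dct1(r) for r in img]                   # 1D DCT of each row
    cols = [dct1(list(col)) for col in zip(*rows)]  # then of each column
    return [list(r) for r in zip(*cols)]            # transpose back

def dct2_direct(img):
    return [[(2 / math.sqrt(n * m)) * c(u) * c(v) *
             sum(img[y][x] *
                 math.cos((2 * x + 1) * u * math.pi / (2 * n)) *
                 math.cos((2 * y + 1) * v * math.pi / (2 * m))
                 for y in range(m) for x in range(n))
             for u in range(n)]
            for v in range(m)]

img = [[random.uniform(0, 1) for _ in range(n)] for _ in range(m)]
A, B = dct2_separable(img), dct2_direct(img)
assert max(abs(A[i][j] - B[i][j]) for i in range(m) for j in range(n)) < 1e-10
```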
As in the one-dimensional case, each element S(u, v) of the transform is the inner product of the
input and a basis function, but in this case, the basis functions are n x m matrices. Each
two-dimensional basis matrix is the outer product of two of the one-dimensional basis vectors.
For n = m = 8, the following expression creates an 8 x 8 array of the 8 x 8 basis matrices, a
tensor with dimensions {8, 8, 8, 8}:
DCTTensor = Array[
Outer[Times, DCTMatrix[[#1]], DCTMatrix[[#2]]]&,
{8, 8}];
Each basis matrix can be thought of as an image. The 64 basis images in the array are shown in
Figure 2.
The package GraphicsImage.m, included in the electronic supplement, contains the functions
GraphicsImage and ShowImage to create and display a graphics object from a given matrix.
GraphicsImage uses the built-in function Raster to translate a matrix into an array of gray
cells. The matrix elements are scaled so that the image spans the full range of graylevels. An
optional second argument specifies a range of values to occupy the full grayscale; values outside
the range are clipped. The function ShowImage displays the graphics object using Show.
<< GraphicsImage.m
Show[GraphicsArray[
Map[GraphicsImage[#, {-.25, .25}]&,
Reverse[DCTTensor],
{2}] ]];
Figure 2. The 8 x 8 array of basis images for the two-dimensional discrete cosine transform.
Each basis matrix is characterized by a horizontal and a vertical spatial frequency. The matrices
shown here are arranged left to right and bottom to top in order of increasing frequencies.
To illustrate the 2D transform, we apply it to an 8 x 8 image of the letter A:
ShowImage[ input2 =
{{0, 1, 0, 0, 0, 1, 0, 0}, {0, 1, 0, 0, 0, 1, 0, 0},
{0, 1, 1, 1, 1, 1, 0, 0}, {0, 1, 0, 0, 0, 1, 0, 0},
{0, 0, 1, 0, 1, 0, 0, 0}, {0, 0, 1, 0, 1, 0, 0, 0},
{0, 0, 1, 0, 1, 0, 0, 0}, {0, 0, 0, 1, 0, 0, 0, 0}}]
-Graphics-
As in the 1D case, it is possible to express the 2D DCT as an array of inner products (a tensor
contraction):
output2 = Array[
(Plus @@ Flatten[DCTTensor[[#1, #2]] input2])&,
{8, 8}];
ShowImage[output2]
-Graphics-
The pixels in this DCT image describe the proportion of each two-dimensional basis function
present in the input image. The pixels are arranged as above, with horizontal and vertical
frequency increasing from left to right and bottom to top, respectively. The brightest pixel in the
lower left corner is known as the DC term, with frequency {0, 0}. It is the average of the pixels
in the input, and is typically the largest coefficient in the DCT of "natural" images.
The 2D IDCT can also be computed in terms of DCTTensor; we leave this as an exercise
for the reader.
Since the two-dimensional DCT is separable, we can extend our function DCT to the case of
two-dimensional input as follows:
DCT[array_?MatrixQ] :=
Transpose[DCT /@ Transpose[DCT /@ array] ]
This function assumes that its input is an 8 x 8 matrix. It takes the 1D DCT of each row,
transposes the result, takes the DCT of each new row, and transposes again. This function is
more efficient than computing the tensor contraction shown above, since it exploits the built-in
function InverseFourier.
We compare the result of this function to that obtained using contraction of tensors:
DCT[input2] - output2 // Chop // Abs // Max
0
The definition of the inverse 2D DCT is straightforward:
IDCT[array_?MatrixQ] :=
Transpose[IDCT /@ Transpose[IDCT /@ array] ]
As an example, we invert the transform of the letter A:
ShowImage[Chop[IDCT[output2]]];
As noted earlier, the components of the DCT output indicate the magnitude of image components
at various 2D spatial frequencies. To illustrate, we can set the last row and column of the DCT of
the letter A equal to zero:
output2[[8]] = Table[0, {8}];
Do[output2[[i, 8]] = 0, {i, 8}];
Now take the inverse transform:
ShowImage[Chop[IDCT[output2]]];
The result is a blurred letter A: the highest horizontal and vertical frequencies have been removed.
This is easiest to see when the image is reduced in size so that individual pixels are not as visible.
2D Blocked DCT
To this point, we have defined functions to compute the DCT of a list of length n = 8 and the 2D
DCT of an 8 x 8 array. We have restricted our attention to this case partly for simplicity of
exposition, and partly because when it is used for image compression, the DCT is typically
restricted to this size. Rather than taking the transformation of the image as a whole, the DCT is
applied separately to 8 x 8 blocks of the image. We call this a blocked DCT.
To compute a blocked DCT, we do not actually have to divide the image into blocks. Since the
2D DCT is separable, we can partition each row into lists of length 8, apply the DCT to them,
rejoin the resulting lists, and then transpose the whole image and repeat the process:
DCT[list_?(Length[#]>8&)] :=
Join @@ (DCT /@ Partition[list, 8])
It may be worth tracing the progress of this deceptively simple piece of code as it works upon a
16 x 16 image. First, we observe the order in which Mathematica stores the three rules we have
given for DCT:
?DCT
Global`DCT
DCT[(array_)?MatrixQ] := Transpose[DCT /@ Transpose[DCT /@
array]]
DCT[(list_)?(Length[#1] > 8 & )] :=
Apply[Join, DCT /@ Partition[list, 8]]
DCT[list_] := Re[DCTTwiddleFactors*
InverseFourier[N[list[[{1, 3, 5, 7, 8, 6, 4, 2}]]]]]
When evaluating DCT of a 16 x 16 image, Mathematica begins by checking the first rule. It
recognizes that the input is a matrix, and thus invokes the rule and applies DCT to each row.
When DCT is applied to a row of length 16, the second rule comes into play. The row is
partitioned into two lists of length 8, and DCT is applied to each. These applications invoke the
last rule, which simply computes the 1D DCT of the lists of length 8. The two sub-rows are then
rejoined by the second rule. After each row has been transformed in this way, the entire matrix is
transposed by the first rule. The process of partitioning, transforming, and rejoining each row is
then repeated, and the resulting matrix is transposed again.
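The same three-rule dispatch can be sketched in Python (the names are ours): dct_row partitions a long row into length-8 pieces and rejoins the transforms, and dct_blocked maps it over the rows, transposes, and repeats:

```python
import math

BLOCK = 8

def c(u):
    return 1 / math.sqrt(2) if u == 0 else 1.0

def dct8(s):
    # Plain 1D DCT of a length-8 list.
    return [math.sqrt(2 / BLOCK) * c(u) *
            sum(s[x] * math.cos((2 * x + 1) * u * math.pi / (2 * BLOCK))
                for x in range(BLOCK))
            for u in range(BLOCK)]

def dct_row(row):
    # Long rows are partitioned into length-8 pieces, transformed, rejoined.
    out = []
    for i in range(0, len(row), BLOCK):
        out.extend(dct8(row[i:i + BLOCK]))
    return out

def dct_blocked(image):
    # Transform every row, transpose, transform again, transpose back.
    step1 = list(zip(*[dct_row(r) for r in image]))
    return [list(r) for r in zip(*[dct_row(r) for r in step1])]

flat = [[1.0] * 16 for _ in range(16)]   # constant 16 x 16 test image
out = dct_blocked(flat)
# Each 8 x 8 block reduces to a single DC coefficient of 8 in its corner.
assert abs(out[0][0] - 8) < 1e-9 and abs(out[8][8] - 8) < 1e-9
assert abs(out[0][1]) < 1e-9
```

On a constant image this leaves one nonzero DC coefficient per 8 x 8 block, which is the lattice of bright dots visible in a blocked DCT display.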
For a test image, we provide a small 64 x 64 picture of a space shuttle launch. We use the utility
function ReadImageRaw, defined in the package GraphicsImage.m to read a matrix of
graylevels from a file:
shuttle = ReadImageRaw["shuttle", {64, 64}];
ShowImage[shuttle]
-Graphics-
Applying the DCT to this image gives an image consisting of 8 x 8 blocks, each block a DCT of
8 x 8 pixels:
ShowImage[DCT[shuttle], {-300, 300}]
-Graphics-
The lattice of bright dots is formed by the DC coefficients from each of the DCT blocks. To
reduce the dominance of these terms, we display the image with a clipped graylevel range. Note
also the greater activity in the lower left compared to the upper right, which corresponds mainly
to uniform sky.
The inverse DCT of a list of length greater than 8 is defined in the same way as the forward
transform:
IDCT[list_?(Length[#]>8&)] :=
Join @@ (IDCT /@ Partition[list, 8])
Here is a simple test:
shuttle - IDCT[DCT[shuttle]] // Chop // Abs // Max
0
Quantization
DCT-based image compression relies on two techniques to reduce the data required to represent
the image. The first is quantization of the image's DCT coefficients; the second is entropy coding
of the quantized coefficients. Quantization is the process of reducing the number of possible
values of a quantity, thereby reducing the number of bits needed to represent it. Entropy coding is
a technique for representing the quantized data as compactly as possible. We will develop
functions to quantize images and to calculate the level of compression provided by different
degrees of quantization. We will not implement the entropy coding required to create a
compressed image file.
A simple example of quantization is the rounding of reals into integers. To represent a real
number between 0 and 7 to some specified precision takes many bits. Rounding the number to
the nearest integer gives a quantity that can be represented by just three bits.
x = Random[Real, {0, 7}]
2.78452
Round[x]
3
In this process, we reduce the number of possible values of the quantity (and thus the number of
bits needed to represent it) at the cost of losing information. A "finer" quantization, that allows
more values and loses less information, can be obtained by dividing the number by a weight
factor before rounding:
w = 1/4;
Round[x/w]
11
Taking a larger value for the weight gives a "coarser" quantization.
Dequantization, which maps the quantized value back into its original range (but not its original
precision), is achieved by multiplying the value by the weight:
w * % // N
2.75
The quantization error is the change in a quantity after quantization and dequantization. The largest
possible quantization error is half the value of the quantization weight.
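We can confirm the bound numerically; a quick Python check of our own, assuming uniformly random inputs:

```python
import random

w = 0.25                      # quantization weight
worst = 0.0
for _ in range(10_000):
    x = random.uniform(0, 7)
    x_hat = round(x / w) * w  # quantize, then dequantize
    worst = max(worst, abs(x - x_hat))
# The reconstruction error never exceeds half the weight.
assert worst <= w / 2 + 1e-12
```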
In the JPEG image compression standard, each DCT coefficient is quantized using a weight that
depends on the frequencies for that coefficient. The coefficients in each 8 x 8 block are divided by
a corresponding entry of an 8 x 8 quantization matrix, and the result is rounded to the nearest
integer.
In general, higher spatial frequencies are less visible to the human eye than low frequencies.
Therefore, the quantization factors are usually chosen to be larger for the higher frequencies. The
following quantization matrix is widely used for monochrome images and for the luminance
component of a color image. It is given in the JPEG standards documents, yet is not part of the
standard, so we call it the "de facto" matrix:
qLum =
{{16, 11, 10, 16, 24, 40, 51, 61},
{12, 12, 14, 19, 26, 58, 60, 55},
{14, 13, 16, 24, 40, 57, 69, 56},
{14, 17, 22, 29, 51, 87, 80, 62},
{18, 22, 37, 56, 68,109,103, 77},
{24, 35, 55, 64, 81,104,113, 92},
{49, 64, 78, 87,103,121,120,101},
{72, 92, 95, 98,112,100,103, 99}};
Displaying the matrix as a grayscale image shows the dependence of the quantization factors on
the frequencies:
ShowImage[qLum];
To implement the quantization process, we must partition the transformed image into 8 x 8
blocks:
BlockImage[image_, blocksize_:{8, 8}] :=
Partition[image, blocksize] /;
And @@ IntegerQ /@ (Dimensions[image]/blocksize)
The function UnBlockImage reassembles the blocks into a single image:
UnBlockImage[blocks_] :=
Partition[
Flatten[Transpose[blocks, {1, 3, 2}]],
{Times @@ Dimensions[blocks][[{2, 4}]]}]
For example:
Table[i + 8(j-1), {j, 4}, {i, 6}] // MatrixForm
1 2 3 4 5 6
9 10 11 12 13 14
17 18 19 20 21 22
25 26 27 28 29 30
BlockImage[%, {2, 3}] // MatrixForm
1 2 3 4 5 6
9 10 11 12 13 14
17 18 19 20 21 22
25 26 27 28 29 30
UnBlockImage[%] // MatrixForm
1 2 3 4 5 6
9 10 11 12 13 14
17 18 19 20 21 22
25 26 27 28 29 30
Our quantization function blocks the image, divides each block (element-by-element) by the
quantization matrix, reassembles the blocks, and then rounds the entries to the nearest integer:
DCTQ[image_, qMatrix_] :=
Map[(#/qMatrix)&,
BlockImage[image, Dimensions[qMatrix]],
{2}] // UnBlockImage // Round
The dequantization function blocks the matrix, multiplies each block by the quantization factors,
and reassembles the matrix:
IDCTQ[image_, qMatrix_] :=
Map[(# qMatrix)&,
BlockImage[image, Dimensions[qMatrix]],
{2}] // UnBlockImage
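In outline, the per-block arithmetic is just an element-by-element divide-round and multiply. A Python sketch for a single block (the function names are ours, and unlike DCTQ above it handles one block rather than a whole image):

```python
def quantize_block(block, q_matrix):
    # Divide element-by-element by the quantization factors and round.
    return [[round(b / q) for b, q in zip(brow, qrow)]
            for brow, qrow in zip(block, q_matrix)]

def dequantize_block(block, q_matrix):
    # Multiply back by the factors; the lost precision is not recovered.
    return [[b * q for b, q in zip(brow, qrow)]
            for brow, qrow in zip(block, q_matrix)]

coeffs = [[100, -37], [8, 0]]   # toy 2 x 2 "block" of DCT coefficients
q      = [[16, 11], [12, 12]]   # matching corner of a weight matrix
assert quantize_block(coeffs, q) == [[6, -3], [1, 0]]
```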
To show the effect of quantization, we will transform, quantize, and reconstruct our image of the
shuttle using the quantization matrix introduced above:
qshuttle = shuttle //
DCT // DCTQ[#, qLum]& // IDCTQ[#, qLum]& // IDCT;
For comparison, we show the original image together with the quantized version:
Show[GraphicsArray[
GraphicsImage[#, {0, 255}]& /@ {shuttle, qshuttle}]];
Note that some artifacts are visible, particularly around high-contrast edges. In the next section,
we will compare the visual effects and the amount of compression obtained from different
degrees of quantization.
Entropy
To measure how much compression is obtained from a quantization matrix, we use a famous
theorem of Claude Shannon [Shannon and Weaver 1949]. The theorem states that for a sequence
of symbols with no correlations beyond first order, no code can be devised to represent the
sequence that uses fewer bits per symbol than the first-order entropy, which is given by
h = − Σ_i p_i log2(p_i)

where p_i is the relative frequency of the ith symbol.
To compute the first-order entropy of a list of numbers, we use the function Frequencies,
from the standard package Statistics`DataManipulation`. This function computes the
relative frequencies of elements in a list:
Frequencies[list_List] :=
Map[{Count[list, #], #}&, Union[list]]
Characters["mississippi"]
{m, i, s, s, i, s, s, i, p, p, i}
Frequencies[%]
{{4, i}, {1, m}, {2, p}, {4, s}}
Calculating the first-order entropy is straightforward:
Entropy[list_] :=
- Plus @@ N[# Log[2, #]]& @
(First[Transpose[Frequencies[list]]]/Length[list])
For example, the entropy of a list of four distinct symbols is 2, so 2 bits are required to code each
symbol:
Entropy[{"a", "b", "c", "d"}]
2.
Similarly, 1.82307 bits are required for this longer list with four symbols:
Entropy[Characters["mississippi"]]
1.82307
A list with more symbols and fewer repetitions requires more bits per symbol:
Entropy[Characters["california"]]
2.92193
The appearance of fractional bits may be puzzling to some readers, since we think of a bit as a
minimal, indivisible unit of information. Fractional bits are a natural outcome of the use of what
are called "variable word-length" codes. Consider an image containing 63 pixels with a greylevel
of 255, and one pixel with a graylevel of 0. If we employed a code which used a symbol of
length 1 bit to represent 255, and a symbol of length 2 bits to represent 0, then we would need 65
bits to represent the image, or in terms of the average bit-rate, 65/64 = 1.0156 bits/pixel. The
entropy as calculated above is a lower bound on this average bit-rate.
The compression ratio is another frequently used measure of how effectively an image has been
compressed. It is simply the ratio of the size of the image file before and after compression. It is
equal to the ratio of bit-rates, in bits/pixel, before and after compression. Since the initial bit-rate is
usually 8 bits/pixel, and the entropy is our estimate of the compressed bit-rate, the compression
ratio is estimated by 8/entropy.
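The whole estimate fits in a few lines of Python (entropy here is our own helper, mirroring the Mathematica definition above):

```python
import math
from collections import Counter

def entropy(symbols):
    # First-order entropy: -sum of p_i log2(p_i) over symbol frequencies.
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

rate = entropy(list("mississippi"))   # about 1.82 bits/symbol
ratio = 8 / rate                      # estimated compression ratio vs. 8 bpp
```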
We will use the following function to examine the effects of quantization:
f[image_, qMatrix_] :=
{Entropy[Flatten[#]], IDCT[IDCTQ[#, qMatrix]]}& @
DCTQ[DCT[image], qMatrix]
This function transforms and quantizes an image, computes the entropy, and dequantizes and
reconstructs the image. It returns the entropy and the resulting image. A simple way to
experiment with different degrees of quantization is to divide the "de facto" matrix qLum by a
scalar and look at the results for various values of this parameter:
test = f[shuttle, qLum/#]& /@ {1/4, 1/2, 1, 4};
Here are the reconstructed images and the corresponding entropies:
Show[GraphicsArray[
Partition[
Apply[
ShowImage[#2, {0, 255}, PlotLabel -> #1,
DisplayFunction -> Identity]&,
test, 1],
2] ] ]
(plot labels, left to right and top to bottom: entropies of 0.838331, 1.25015, 1.7534, and 3.10061 bits/pixel)
-GraphicsArray-
Timing
Most of the computation time required to transform, quantize, dequantize, and reconstruct an
image is spent on forward and inverse DCT calculations. Because these transforms are applied to
blocks, the time required is proportional to the size of the image. On a SUN Sparcstation 2, the
timings increase (at a rate of 0.005 second/pixel) from about 20 seconds for a 64 x 64 image to
about 320 seconds for a 256 x 256 image.
These times are much longer than for comparable functions written in a low-level language such
as C. For example, a C program performed the same computations in under 2 seconds for a
256 x 256 image, more than 100 times faster than our Mathematica functions. However, for
the purposes for which our code was developed, namely education, algorithm development, and
prototyping other applications, the timings are acceptable.
References
Ahmed, N., T. Natarajan, and K. R. Rao. 1974. On image processing and a discrete cosine
transform. IEEE Transactions on Computers C-23(1): 90-93.
Chen, W. H., and W. K. Pratt. 1984. Scene adaptive coder. IEEE Transactions on
Communications COM-32: 225-232.
Jain, A. K. 1989. Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice
Hall.
Puri, A. 1992. Video coding using the MPEG-1 compression standard. Society for Information
Display Digest of Technical Papers 23: 123-126.
Shannon, C. E., and W. Weaver. 1949. The mathematical theory of communication. Urbana:
University of Illinois Press.
Wallace, G. 1991. The JPEG still picture compression standard. Communications of the ACM
34(4): 30-44.
Watson, A. B. 1993. DCT quantization matrices visually optimized for individual images.
Proceedings of the SPIE 1913: 202-216 (Human Vision, Visual Processing, and Digital Display
IV. Rogowitz ed. SPIE. Bellingham, WA).
Biography
Andrew B. Watson is the Senior Scientist for Vision Research at NASA Ames Research Center
in Mountain View, California, where he works on models of visual perception and their
application to the coding, understanding, and display of visual information. He is the author of
over 60 articles on human vision, image processing, and robotic vision.
Electronic Supplement
The electronic supplement contains the packages DCT.m and GraphicsImage.m, and the
image file shuttle.
Andrew B. Watson
NASA Ames Research Center
beau@vision.arc.nasa.gov