Available online at www.sciencedirect.com
ScienceDirect
Procedia Computer Science 38 (2013) 196–205
www.elsevier.com/locate/procedia
* Corresponding author. E-mail address: psjkumar@cam.ac.uk
1743-0905 © 2013 The Authors. Published by Elsevier B.V.
Peer-review under responsibility of Computer Science Education
Advances in Parallel Computing
New Frontiers in Computing and Communications 2013
Intelligent Parallel Processing and Compound Image Compression
P.S. Jagadeesh Kumar a,*, Krishna Moorthy b
a University of Cambridge, United Kingdom
b Anna University, Chennai, India
Abstract
Rapid advances in parallel processing, combined with the need to store and distribute very large volumes of digital data, especially compound images, have raised a number of challenges for researchers and other stakeholders. These challenges, above all the high cost of storing and transmitting digital information, remain a focus of the research community and have motivated the search for image compression methods that can achieve excellent results. One such practice is the parallel processing of a variety of compression techniques, which splits an image into components of different frequencies and has the benefit of substantial compression. This paper examines the computational complexity and quantitative optimization of diverse image compression tactics and further relates them to the performance of parallel processing. Computational efficiency is analysed and estimated with respect to the Central Processing Unit (CPU) as well as the Graphical Processing Unit (GPU). PSNR (Peak Signal to Noise Ratio) is used to estimate the quality of image reconstruction. Results are obtained and discussed for different image compression algorithms: Block Truncation Coding (BTC), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Dual Tree Complex Wavelet Transform (DTCWT), Set Partitioning in Hierarchical Trees (SPIHT), and Embedded Zero-tree Wavelet (EZW). The evaluation considers coding efficiency, memory requirements, and image size and quality.
Keywords: Computational Paradigm, Quantitative Optimization, Parallel Processing, Compound Image Compression, Performance Evaluation
Nomenclature

BTC    Block Truncation Coding
CPU    Central Processing Unit
DCT    Discrete Cosine Transform
DWT    Discrete Wavelet Transform
DTCWT  Dual Tree Complex Wavelet Transform
EZW    Embedded Zero-tree Wavelet
GPU    Graphical Processing Unit
SPIHT  Set Partitioning in Hierarchical Trees
1. Introduction
Digital image compression techniques reduce the space required to store or transmit image data by changing the way those images are represented. There are numerous schemes for compressing digital image data, each with its own advantages and drawbacks. BTC has been used for many years to compress digital grayscale images. BTC for grayscale images is a simple, lossy image compression method. Its advantage is that it is easy to implement compared with vector quantization and transform coding. The BTC technique preserves the mean and the standard deviation of each block. In the encoding stage of BTC, the image is first segmented into a set of non-overlapping blocks, and then the first two statistical moments and the bit plane are computed. In the decoder, every encoded image block is reconstructed from the bit plane and the two moments [1]. BTC achieves 2 bpp (bits per pixel) at low computational cost. The huge quantity of digital information on the web has raised a number of problems, such as very high costs of storing and transmitting that information. Image compression therefore reduces the amount of memory needed to store an image in order to cut these costs. Image processing is, in general, an extremely computation-intensive task. Taking image size and quality into consideration, schemes or methods for image processing must be genuinely capable if they are to deliver useful results. The rapid growth of digital imaging applications, including desktop printing, multimedia, teleconferencing, computer animation and vision, has increased the need for effective image compression techniques [2].
2. Block Truncation Coding
Block Truncation Coding is a lossy image compression method for grayscale images. It divides the original image into blocks and then uses a quantizer to reduce the number of grey levels in each block while preserving the block's mean and standard deviation. It is an early precursor of popular hardware texture-compression techniques, and the BTC method was first adapted to color by a very similar approach called Color Cell Compression. BTC has also been adapted to video compression. BTC was originally proposed by O.R. Mitchell at Purdue University. A later variant of BTC is Absolute Moment Block Truncation Coding (AMBTC), in which, instead of the standard deviation, the first absolute moment is preserved along with the mean. AMBTC is computationally simpler than BTC and also typically results in a lower Mean Squared Error (MSE). Using sub-blocks of 4x4 pixels gives a compression ratio of 4:1, assuming 8-bit integer values are used during transmission or storage. Larger blocks allow greater compression, but quality also decreases with increasing block size owing to the nature of the algorithm. A pixel image is segmented into blocks of typically 4x4 pixels [3]. For each block, the mean and standard deviation of the pixel values are computed; these statistics generally change from block to block. The pixel values chosen for each reconstructed block are selected so that each block of the compressed image has the same mean and standard deviation as the corresponding block of the original image. The quantization that achieves the compression is performed as follows:
$$y(i,j) = \begin{cases} 1, & x(i,j) > \bar{x} \\ 0, & x(i,j) \le \bar{x} \end{cases}$$
Here x(i,j) are the pixel elements of the original block and y(i,j) are the pixel elements of the compressed block. In words: if a pixel value is greater than the mean, it is assigned the value "1", otherwise "0". Values equal to the mean may take either a "1" or a "0", depending on the preference of the programmer implementing the algorithm. This 16-bit block is transmitted along with the values of the mean and standard deviation. Reconstruction is done with two values "a" and "b", which preserve the mean and the standard deviation, as follows:
$$a = \bar{x} - \sigma \sqrt{\frac{q}{m-q}}$$
$$b = \bar{x} + \sigma \sqrt{\frac{m-q}{q}}$$
where σ is the standard deviation, m is the total number of pixels in the block and q is the number of pixels greater than the mean (x̄). To reconstruct the image, or create its approximation, elements assigned a "0" are replaced with the "a" value and elements assigned a "1" are replaced with the "b" value:
$$\hat{x}(i,j) = \begin{cases} a, & y(i,j) = 0 \\ b, & y(i,j) = 1 \end{cases}$$
This shows that the method is asymmetric: the encoder has much more to do than the decoder. The decoder merely replaces the 1s and 0s with the estimated values, whereas the encoder must compute the mean and standard deviation as well as the values a and b.
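This asymmetry is easy to see in code. Below is a minimal sketch of BTC for a single block (NumPy; the function names are ours, not the authors'), preserving the block mean and standard deviation exactly as in the equations above:

```python
import numpy as np

def btc_encode_block(block):
    """Encode one grayscale block: keep the mean, std, and a 1-bit plane."""
    mean = block.mean()
    std = block.std()
    bitplane = (block > mean).astype(np.uint8)   # 1 above the mean, else 0
    return mean, std, bitplane

def btc_decode_block(mean, std, bitplane):
    """Rebuild the block from (mean, std, bitplane), preserving both moments."""
    m = bitplane.size                  # total pixels in the block
    q = int(bitplane.sum())            # pixels above the mean
    if q == 0 or q == m:               # flat block: nothing above/below the mean
        return np.full(bitplane.shape, mean)
    a = mean - std * np.sqrt(q / (m - q))       # value for the '0' pixels
    b = mean + std * np.sqrt((m - q) / q)       # value for the '1' pixels
    return np.where(bitplane == 1, b, a)

# Usage: one 4x4 block; the reconstruction matches the original mean and std
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
rec = btc_decode_block(*btc_encode_block(block))
print(np.allclose(block.mean(), rec.mean()), np.allclose(block.std(), rec.std()))
```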
3. Discrete Cosine Transform
The Discrete Cosine Transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications, from lossy compression of multimedia files to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is decisive for compression: it turns out that fewer cosine functions are needed to approximate a typical signal, while for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry, where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common discrete cosine transform is the type-II DCT, which is often called simply "the DCT". Correspondingly, the type-III DCT is often called "the inverse DCT". Two related transforms are the Discrete Sine Transform (DST), which is equivalent to a DFT of real and odd functions, and the Modified Discrete Cosine Transform (MDCT), which is based on a DCT of overlapping data. Formally, the discrete cosine transform is a linear, invertible function, or equivalently an invertible N x N square matrix. The DCT-I is formulated as:
$$X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\left[\frac{\pi}{N-1}\, n k\right] \qquad k = 0, \dots, N-1$$
Further multiplying the x0 and xN-1 terms by √2, and correspondingly multiplying the X0 and XN-1 terms by 1/√2, makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N-2 real numbers with even symmetry. Note that the DCT-I is not defined for N less than 2. The DCT-I therefore corresponds to the boundary conditions: xn is even about n = 0 and even about n = N-1; likewise for Xk. The DCT-II is described below:
$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right) k\right] \qquad k = 0, \dots, N-1$$
The DCT-II is probably the most commonly used form. This transform is exactly equivalent, up to an overall scale factor of 2, to a DFT of 4N real inputs of even symmetry in which the even-indexed elements are zero. Specifically, it is half of the DFT of the 4N inputs yn, where y2n = 0, y2n+1 = xn for 0 ≤ n < N, y2N = 0, and y4N-n = yn for 0 < n < 2N. Multiplying the x0 term by 1/√2 and multiplying the resulting matrix by an overall scale factor of √(2/N) makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. In many applications, such as JPEG, the scaling is arbitrary, because scale factors can be combined with a subsequent computational step, and a scaling can be chosen that allows the DCT to be computed with fewer multiplications. The DCT-II implies the boundary conditions: xn is even about n = -1/2 and even about n = N-1/2; Xk is even about k = 0 and odd about k = N. The DCT-III is portrayed as:
$$X_k = \frac{1}{2} x_0 + \sum_{n=1}^{N-1} x_n \cos\left[\frac{\pi}{N}\, n \left(k + \frac{1}{2}\right)\right] \qquad k = 0, \dots, N-1$$
Because it is the inverse of the DCT-II (up to a scale factor; see below), this form is sometimes called "the inverse DCT". Dividing the x0 term by √2 instead of 2, and multiplying the resulting matrix by an overall scale factor of √(2/N), makes the DCT-II and DCT-III transposes of one another [4]. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: xn is even about n = 0 and odd about n = N; Xk is even about k = -1/2 and odd about k = N-1/2. The DCT-IV is defined as:
$$X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right] \qquad k = 0, \dots, N-1$$
The DCT-IV matrix becomes orthogonal if it is further multiplied by an overall scale factor of √(2/N). A variant of the DCT-IV, in which data from different transforms are overlapped, is called the MDCT. The DCT-IV implies the boundary conditions: xn is even about n = -1/2 and odd about n = N-1/2; likewise for Xk. DCTs of types I-IV treat both boundaries consistently with respect to the point of symmetry: they are even/odd about either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd about a data point for one boundary and halfway between two data points for the other boundary. In other words, DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N-1) (for DCT-I), 4N (for DCT-II/III) or 8N (for DCT-IV). The four additional types of discrete cosine transform correspond essentially to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments. These variants, however, appear to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs, and this added complexity carries over to the corresponding DCTs.
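To make the definitions concrete, here is a small NumPy sketch (the helper names are ours) that evaluates the DCT-II and DCT-III directly from the formulas above and checks that they invert one another up to the factor 2/N:

```python
import numpy as np

def dct_ii(x):
    """DCT-II straight from the definition: X_k = sum_n x_n cos[(pi/N)(n+1/2)k]."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct_iii(X):
    """DCT-III: x_n = X_0/2 + sum_{k>=1} X_k cos[(pi/N) k (n+1/2)]."""
    N = len(X)
    k = np.arange(1, N)
    return np.array([0.5 * X[0] + np.sum(X[1:] * np.cos(np.pi / N * k * (n + 0.5)))
                     for n in range(N)])

# Usage: the unnormalized pair inverts up to the overall factor 2/N
x = np.random.default_rng(1).standard_normal(8)
print(np.allclose(2.0 / len(x) * dct_iii(dct_ii(x)), x))   # True
```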
4. Discrete Wavelet Transform
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, the chief advantage over Fourier transforms is its temporal resolution: it captures both frequency and location information. The discrete wavelet transform has a vast range of applications in science and engineering. Notably, it is used for signal coding, to represent a discrete signal in a more redundant form, often as a preconditioning for data compression. Practical applications can also be found in the signal processing of accelerations for gait analysis, in digital communications and many others. It has been shown that the DWT can be successfully implemented as an analog filter bank in biomedical signal processing for the design of pacemakers, and also in ultra-wideband (UWB) wireless communications. The DWT of a signal x is computed by passing it through a series of filters. First, the samples are passed through a low-pass filter with impulse response g, resulting in the convolution:
$$y[n] = (x * g)[n] = \sum_{k=-\infty}^{\infty} x[k]\, g[n-k]$$
The signal is also decomposed simultaneously using a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; such a pair is known as a quadrature mirror filter [5].
5. Dual Tree Complex Wavelet Transform
The Complex Wavelet Transform (CWT) is a relatively recent enhancement of the DWT, with important additional properties: it is nearly shift invariant and directionally selective in higher dimensions. It achieves this with a redundancy factor of only 2^d, substantially lower than that of an undecimated DWT. The multidimensional dual-tree CWT is non-separable but is based on a computationally efficient, separable filter bank. The CWT is a complex-valued extension of the standard DWT. It is a 2D wavelet transform that provides a multi-resolution, sparse representation and a useful characterization of the structure of an image. In addition, it provides a high degree of shift-invariance in its magnitude. A drawback of this transform, however, is that it is 2^d-times redundant compared with a separable DWT, where d is the dimension of the signal being transformed. The use of complex wavelets was first introduced by J.M. Lina and L. Gagnon in the framework of the Daubechies orthogonal filter banks in 1995. It was then generalized by Prof. Nick Kingsbury of Cambridge University in 1997. In computer vision, by exploiting the concept of visual context, one can quickly focus on candidate regions where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for large regions, are useful in the accurate detection and recognition of smaller objects. Likewise, the CWT may be applied to identify the activated voxels of the cortex, and temporal independent component analysis may additionally be used to extract the underlying independent sources, whose number is determined by the Bayesian information criterion. The dual-tree complex wavelet transform (DTCWT) computes the complex transform of a signal using two separate DWT decompositions, tree "a" and tree "b", as shown in Fig. 1. If the filters used in one tree are specifically designed differently from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary. This redundancy of two provides extra information for analysis, at the expense of additional computational power [6]. It also provides approximate shift-invariance yet still permits perfect reconstruction of the signal. The design of the filters is particularly important for the transform to work properly, and the essential characteristics are:
(1) The low-pass filters in the two trees must differ by half a sample period
(2) The reconstruction filters are the reverse of the analysis filters
(3) All filters come from the same orthonormal set
(4) Tree "a" filters are the reverse of tree "b" filters
(5) Both trees have the same frequency response
Fig. 1. Block diagram for a 3-level DTCWT.
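A minimal level-1 sketch of the two-tree structure of Fig. 1 is given below: tree "a" supplies the real part and tree "b" the imaginary part. At the first level the two trees can use the same filters offset by one sample; deeper levels of a real DTCWT require specially designed (q-shift) filters, so the Haar pair here is purely illustrative and the function names are ours:

```python
import numpy as np

def analysis(x, g, h):
    """One DWT analysis stage: convolve, then keep every second sample."""
    return np.convolve(x, g)[1::2], np.convolve(x, h)[1::2]

# Tree 'a' filters (zero-padded Haar); tree 'b' is tree 'a' delayed one sample,
# so the two trees sample the signal half a period apart after downsampling.
ga = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)
ha = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2.0)
gb, hb = np.roll(ga, 1), np.roll(ha, 1)   # one-sample delay for tree 'b'

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 16))
lo_a, hi_a = analysis(x, ga, ha)          # tree 'a': real part
lo_b, hi_b = analysis(x, gb, hb)          # tree 'b': imaginary part
detail = hi_a + 1j * hi_b                 # complex wavelet coefficients
print(np.abs(detail))                     # magnitude varies slowly under shifts
```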
6. Embedded Zero-tree Wavelet Transforms
Embedded Zero-tree Wavelet (EZW) coding is a lossy image compression technique. At high compression ratios, most of the coefficients produced by a sub-band transform, such as the wavelet transform, will be zero or very close to zero. This occurs because "real world" images tend to contain mostly low-frequency information that is highly correlated. However, high-frequency information does occur, for example at edges in the image; this is particularly important in terms of human perception of image quality, and thus must be represented accurately in any high-quality coding scheme. If the transformed coefficients are viewed as a tree, with the lowest-frequency coefficients at the root node and with the children of each tree node being the spatially related coefficients in the next higher frequency sub-band, there is a high likelihood that one or more sub-trees will consist entirely of coefficients which are zero or nearly zero; such sub-trees are called zero-trees. Owing to this, the terms node and coefficient are used interchangeably, and when referring to the children of a coefficient, we mean the child coefficients of the node in the tree where that coefficient is located. Children refers to directly connected nodes lower in the tree, and descendants refers to all nodes below a particular node in the tree, even if not directly connected.
In zero-tree based image compression schemes such as EZW and SPIHT, the aim is to use the statistical properties of the trees in order to efficiently code the locations of the significant coefficients. Since most of the coefficients will be zero or nearly zero, the spatial locations of the significant coefficients make up a large portion of the total size of a typical compressed image. A coefficient is considered significant if its magnitude (or the magnitudes of a node and all its descendants, in the case of a tree) is above a particular threshold. By starting with a threshold near the maximum coefficient magnitude and iteratively decreasing it, it is possible to create a compressed representation of an image that progressively adds finer detail. Owing to the structure of the trees, it is likely that if a coefficient in a particular frequency band is insignificant, then all of its descendants, including the spatially related higher frequency band coefficients, will also be insignificant. EZW uses four symbols to represent (a) a zero-tree root, (b) an isolated zero (a coefficient which is insignificant but which has significant descendants), (c) a significant positive coefficient and (d) a significant negative coefficient. The symbols can thus be represented by two binary bits. The compression method consists of a number of iterations through a dominant pass and a subordinate pass; between iterations the threshold is reduced by a factor of two. The dominant pass encodes the significance of the coefficients which have not yet been found significant in earlier iterations, by scanning the trees and emitting one of the four symbols. The children of a coefficient are only scanned if the coefficient was found to be significant, or if the coefficient was an isolated zero. The subordinate pass emits one bit (the most significant bit not yet emitted) for every coefficient that has been found significant in the preceding significance passes. The subordinate pass is therefore similar to bit-plane coding. There are several important features to note. First, it is possible to stop the compression process at any point and obtain an approximation of the original image; the greater the number of bits received, the better the image. Second, because the compression process is structured as a series of decisions, the same process can be run at the decoder to reconstruct the coefficients, with the decisions taken according to the incoming bit stream. In a practical implementation, it would be usual to use an entropy code, such as arithmetic coding, to further improve the performance of the dominant pass. Bits from the subordinate pass are typically so random that entropy coding provides no further coding gain [7]. The coding performance of EZW has since been surpassed by SPIHT and its many derivatives. The EZW procedure also has the following aspects:
(1) A discrete wavelet transform, which provides a compact multi-resolution representation of the image.
(2) Zero-tree coding, which provides a compact multi-resolution representation of the significance map.
(3) Successive approximation, for a compact multi-precision representation of the significant coefficients.
(4) A prioritization protocol determined by the precision, magnitude, scale, and spatial location of the wavelet coefficients.
(5) Adaptive multilevel arithmetic coding for entropy coding of the symbol sequence.
A sketch of the dominant-pass symbol assignment follows this list.
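The sketch below assumes the usual quadtree parent-child mapping on a square coefficient array, with the DC root mapped to its three neighbours (a common EZW convention); the function names are ours and real EZW implementations treat the subband layout more carefully:

```python
import numpy as np

def children(i, j, n):
    """Children of node (i, j) under the quadtree mapping; (0, 0) maps to its
    three neighbours rather than to itself."""
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    return [(a, b) for (a, b) in kids if a < n and b < n and (a, b) != (i, j)]

def subtree_significant(c, i, j, T):
    """True if any descendant of (i, j) has magnitude >= T."""
    return any(abs(c[a, b]) >= T or subtree_significant(c, a, b, T)
               for (a, b) in children(i, j, c.shape[0]))

def dominant_symbol(c, i, j, T):
    """Classify a coefficient with the four EZW dominant-pass symbols."""
    if abs(c[i, j]) >= T:
        return 'POS' if c[i, j] >= 0 else 'NEG'   # significant +/- coefficient
    # isolated zero (significant descendants) vs zero-tree root
    return 'IZ' if subtree_significant(c, i, j, T) else 'ZTR'

c = np.array([[34., -8.,  6.,  2.],
              [ 9.,  3.,  1.,  0.],
              [ 4.,  2.,  0.,  1.],
              [ 2.,  0.,  1.,  0.]])
T = 2.0 ** np.floor(np.log2(np.abs(c).max()))     # initial threshold: 32
print([dominant_symbol(c, i, j, T) for (i, j) in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> ['POS', 'ZTR', 'ZTR', 'ZTR']
```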
7. Set Partitioning in Hierarchical Trees
Set partitioning in hierarchical trees (SPIHT) is an image compression technique that exploits the inherent similarity across the sub-bands in a wavelet decomposition of an image. The method codes the most important wavelet transform coefficients first, and transmits the bits so that an increasingly refined copy of the original image can be obtained progressively. The SPIHT algorithm builds on the embedded zero-tree wavelet (EZW) coding technique; it uses spatial orientation trees and a set-partitioning sorting procedure. Coefficients corresponding to the same spatial location in different sub-bands of the pyramid structure exhibit self-similar characteristics. SPIHT defines parent-children relationships among these self-similar sub-bands to establish the spatial orientation trees [8].
Step 1: In the sorting pass, the List of Insignificant Pixels (LIP) is scanned to decide whether each entry is significant at the current threshold. If an entry is found to be significant, a bit "1" is output, followed by one more bit for the sign of the coefficient, "1" for positive or "0" for negative. The significant entry is then moved to the List of Significant Pixels (LSP).
Step 2: Entries in the List of Insignificant Sets (LIS) are processed. When an entry is the set of all descendants of a coefficient, called "type 1", magnitude tests over all descendants of the current entry are carried out to determine whether they are significant. If the entry is found to be significant, the direct children of the entry undergo magnitude tests themselves: a significant direct child is moved to the LSP, otherwise it is moved to the LIP. If the entry is found to be insignificant, the spatial orientation tree rooted at the current entry is a zero-tree, so a bit "0" is output and no further processing is required. Finally, the entry is moved to the end of the LIS as "type 2", which denotes the set of all descendants excluding the direct children of a coefficient. If an entry in the LIS is of type 2, the significance test is carried out on the descendants of its direct children. If the test is positive, the spatial orientation tree rooted at the type-2 entry is split into four sub-trees rooted at the direct children, and these direct children are appended to the end of the LIS as type-1 entries. The key point of the LIS sorting is that complete sets of insignificant coefficients, the zero-trees, are represented with a single zero. The reason for defining the spatial parent-children relationships is to increase the likelihood of finding these zero-trees.
Step 3: Finally, the refinement pass outputs the refinement bit (the nth bit) of the coefficients in the LSP at the current threshold. Before the algorithm proceeds to the next round, the current threshold is halved.
8. Computational Paradigm and Quantitative Optimization
Although the naive application of the definition would require O(N^2) operations, it is possible to compute the same result with only O(N log N) complexity by factorizing the computation in a manner similar to the Fast Fourier Transform (FFT). DCTs can also be computed via FFTs combined with O(N) pre-processing and post-processing steps. In general, O(N log N) methods for computing DCTs are known as Fast Cosine Transform (FCT) algorithms. The most efficient algorithms, in principle, are usually those specialized directly for the DCT, as opposed to using an ordinary FFT plus O(N) extra operations. However, even "specialized" DCT algorithms, including all of those that achieve the lowest known arithmetic counts (at least for power-of-two sizes), are typically closely related to FFT algorithms; since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. Algorithms based on the Cooley-Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and an analogous algorithm was proposed by Feig and Winograd for the DCT. Because the algorithms for DFTs, DCTs, and related transforms are all so closely related, any improvement in algorithms for one transform will in principle lead to immediate gains for the other transforms as well.
While DCT algorithms that use an unmodified FFT often carry some theoretical overhead compared with the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT routines are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms. Performance on modern hardware is typically not dominated by arithmetic counts alone, and optimization requires substantial engineering effort. Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes, such as the 8x8 DCT-II used in JPEG compression, or the small DCTs or MDCTs typically used in audio compression. Reduced code size may also be a reason to use a specialized DCT in embedded applications. In fact, the DCT algorithms that use an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size 4N with real-even symmetry whose even-indexed elements are zero. One common method for computing this via an FFT can, in retrospect, be viewed as one step of a radix-4 Cooley-Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II. The radix-4 step reduces the size-4N DFT to four size-N DFTs of real data, two of which are zero and two of which are equal to one another by the even symmetry, hence giving a single size-N FFT of real data plus O(N) operations. Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step; if the subsequent size-N real-data FFT is also performed by a real-data split-radix algorithm, then the resulting algorithm actually matches the lowest published arithmetic count for the DCT-II (2N log2 N - N + 2). Consequently, there is nothing inherently bad about computing the DCT via an FFT from an arithmetic point of view; it is often merely a question of whether the corresponding FFT algorithm is optimal. The filter bank realization of the Discrete Wavelet Transform takes only O(N) time in certain cases, as compared with O(N log N) for the fast Fourier transform. The wavelet filter bank performs each of its two O(N) convolutions, then splits the signal into two branches of size N/2. In contrast to the FFT, which recursively splits both the upper branch and the lower branch, the DWT only recursively splits the upper branch, the one convolved with g[n]. This leads to the following recurrence, which gives an O(N) time for the entire operation, as can be shown by a geometric-series expansion of the relation:
$$T(N) = 2N + T\!\left(\frac{N}{2}\right) = O(N)$$
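The length-4N construction of the DCT-II described above is easy to verify numerically. The sketch below (NumPy; the function names are ours) embeds x at the odd indices of a real-even sequence, takes its DFT, and halves it, then checks the result against the definition:

```python
import numpy as np

def dct_ii_via_fft(x):
    """DCT-II as half the DFT of a real-even length-4N sequence:
    y_{2n} = 0, y_{2n+1} = x_n for 0 <= n < N, y_{2N} = 0, y_{4N-n} = y_n."""
    N = len(x)
    y = np.zeros(4 * N)
    y[1:2 * N:2] = x                        # odd indices hold x; even stay 0
    y[4 * N - 1:2 * N:-2] = x               # mirror: y_{4N-n} = y_n
    return np.fft.fft(y)[:N].real / 2.0     # X_k is half the DFT of y

def dct_ii_direct(x):
    """Reference evaluation straight from the DCT-II definition."""
    N, n = len(x), np.arange(len(x))
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

x = np.random.default_rng(2).standard_normal(8)
print(np.allclose(dct_ii_via_fft(x), dct_ii_direct(x)))   # True
```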
9. Parallel Processing Performance
With the growth of computer technology, multiple cores are nowadays available in personal computers, and the Graphics Processing Unit is also present, which can likewise be used for parallel computing. A multi-core processing unit is an architecture in which multiple processors are packed into a single integrated circuit with shared core logic, comprising a number of high-performance cores [9]. A GPU is an electronic circuit comprising hundreds of moderately powerful cores; it can be inserted in a PCI slot. High Performance Computing (HPC) is aggregated computational power that delivers much higher performance than one could obtain from a typical desktop computer or workstation, in order to solve larger problems in engineering and business. High-performance computers are a group of computers; each computer in the group is considered a node, which can be a distinct multi-core machine. Parallel processing architectures can be categorized into two kinds:
1. Distributed memory designs
2. Shared memory designs
9.1. Parallel processing of Compound Image Compression Algorithm Using Static Partitioning
In static partitioning, the compound image is divided into a number of partitions according to the available number of processors; the steps are as follows (a code sketch appears at the end of this subsection):
1. Divide the given image into as many arrays as there are available processors.
2. Each processor then applies the sequential compression algorithm to its own sub-image, with minimal communication among the processors.
The quality of the compressed image will be poorer than that obtained by sequential compression, because each processor works with only a partition of the whole domain and thus may not obtain as good a match for its blocks as the sequential algorithm would. This drawback can be overcome by employing processors with enough memory to hold the entire image as the search domain.
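A sketch of static partitioning with one worker process per stripe follows (Python multiprocessing; the per-block BTC-style codec is just a stand-in for any sequential compression algorithm, and the function names are ours):

```python
import numpy as np
from multiprocessing import Pool

def compress_stripe(stripe):
    """Stand-in for any sequential codec (BTC, DCT, DWT, ...) applied to one
    stripe: per-8x8-block (mean, std, bitplane) triples, i.e. sequential BTC."""
    out = []
    for r in range(0, stripe.shape[0], 8):
        for c in range(0, stripe.shape[1], 8):
            blk = stripe[r:r + 8, c:c + 8]
            out.append((blk.mean(), blk.std(),
                        (blk > blk.mean()).astype(np.uint8)))
    return out

def compress_static(image, n_workers=4):
    """Static partitioning: one stripe per processor, each worker running the
    sequential algorithm on its own sub-image with no communication."""
    stripes = np.array_split(image, n_workers, axis=0)
    with Pool(n_workers) as pool:
        return pool.map(compress_stripe, stripes)

if __name__ == "__main__":
    img = np.random.default_rng(3).integers(0, 256, size=(256, 256)).astype(float)
    parts = compress_static(img)
    print(sum(len(p) for p in parts))   # 1024 blocks in total
```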
9.2. Parallel processing of Compound Image Compression Algorithm Using Dynamic Partitioning
In dynamic partitioning, the compound image compression is parallel processed as follows (see the sketch after this list):
1. The original image is divided into small non-overlapping array blocks of identical size, for example 4x4, 8x8 or 16x16.
2. The array blocks are grouped into compound task blocks.
3. The designer can probe the available computing nodes and then assign the appropriate compression algorithm to each node as required.
4. The designer creates a thread for every computing node with an assigned algorithm. This thread is responsible for transmitting compound task blocks to the corresponding computing node and for receiving the computational results.
5. All computational results are ordered according to the block identity of the compound task blocks and then written into the compressed file together.
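A sketch of dynamic partitioning with a shared task queue and results keyed by block id follows (Python threads stand in for the computing nodes, and the codec is a placeholder; the function names are ours):

```python
import queue
import threading
import numpy as np

def worker(tasks, results, codec):
    """Each computing node's thread: pull task blocks, compress, report by id."""
    while True:
        try:
            block_id, block = tasks.get_nowait()
        except queue.Empty:
            return
        results[block_id] = codec(block)

def compress_dynamic(image, codec, n_nodes=4, bs=16):
    """Dynamic partitioning: blocks go into a shared queue; results are keyed
    by block id so they can be written out in order afterwards."""
    tasks, results = queue.Queue(), {}
    bid = 0
    for r in range(0, image.shape[0], bs):
        for c in range(0, image.shape[1], bs):
            tasks.put((bid, image[r:r + bs, c:c + bs]))
            bid += 1
    threads = [threading.Thread(target=worker, args=(tasks, results, codec))
               for _ in range(n_nodes)]
    for t in threads: t.start()
    for t in threads: t.join()
    return [results[i] for i in sorted(results)]   # ordered by block id

img = np.random.default_rng(4).integers(0, 256, size=(64, 64)).astype(float)
out = compress_dynamic(img, codec=lambda b: (b.mean(), b.std()), n_nodes=4)
print(len(out))   # 16 task blocks
```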
Conclusion
Although the speed of each individual processor continues to climb at a steady rate, the computational requirements driven by applications have always outpaced the available technology. This pushes designers to seek faster and more cost-effective processors. Parallel and distributed computing go one step further, offering computing power beyond the technological limits of a single-processor system. Both hardware and software advances continue to bring improvements, each with its own suitability to a variety of applications. Cost-effective hardware schemes are now widely available off the shelf. Software-based compression, previously impractical, is now expanding from research into the commercial market. Single-chip solutions are widely available, but well-organized designs that use parallel processing of multiple chips and pipelining techniques continue to yield fast encoder circuits for demanding applications. Multimedia processors are becoming increasingly popular, as they can be flexibly embedded in complex applications. Even though many compound image compression solutions can now be handled by a single processor, the drive to achieve higher picture quality and lower bit rates continues to push compression research toward new, more elaborate procedures that can benefit from the availability of parallel processing. Parallel processing can therefore be very effective for high-quality compound image compression, where tuning of parameters and multiple passes of encoding are needed to produce the best possible compressed compound image.
References
1. Ritu Chourasiya, Ajit Shrivastava. A Study of Image Compression Based Transmission Algorithm Using SPIHT for Low Bit Rate Application. Advanced Computing: An International Journal, Vol. 3, No. 6, November 2012, pp. 47-54.
2. K.S. Thyagarajan. Still Image and Video Compression with MATLAB. John Wiley & Sons, Inc., 2011.
3. K. Somasundaram, I. Kaspar Raj. Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding. World Academy of Science, Engineering and Technology, 19, 2008, pp. 974-979.
4. Kamrul Hasan Talukder, Koichi Harada. Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image. IAENG International Journal of Applied Mathematics, 36:1, 2007.
5. Ritu Chourasiya, Ajit Shrivastava. A Study of Image Compression Based Transmission Algorithm Using SPIHT for Low Bit Rate Application. Advanced Computing: An International Journal, Vol. 3, No. 6, November 2012, pp. 47-54.
6. K.S. Thyagarajan. Still Image and Video Compression with MATLAB. John Wiley & Sons, Inc., 2011.
7. K. Somasundaram, I. Kaspar Raj. Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding. World Academy of Science, Engineering and Technology, 19, 2008, pp. 974-979.
8. Kamrul Hasan Talukder, Koichi Harada. Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image. IAENG International Journal of Applied Mathematics, 36:1, 2007.
9. S.T. Klein, Y. Wiseman. Parallel Huffman Decoding with Applications to JPEG Files. The Computer Journal, Vol. 46, No. 5, 2003, pp. 487-497.
10. Mohammad Mofarreh Bonab et al. A New Technique for Image Compression Using PCA. International Journal of Computer Science and Communication Networks, 2(1), 2010, pp. 111-116.
11. Aysha V., Kannan Balakrishnan. Parallel Genetic Algorithm for Document Image Compression Optimization. International Conference on Electronics and Information Engineering, IEEE, 2010.
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
 
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
 
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
 
Optical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for LeukemiaOptical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for Leukemia
 
Integrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical ApplicationsIntegrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical Applications
 
Automatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in SurgeryAutomatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
 
Continuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet TransformContinuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet Transform
 
A Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the RelativityA Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the Relativity
 
Advanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical ApplicationsAdvanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical Applications
 
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
 
Intelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical ImagingIntelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical Imaging
 
Robotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief NetworksRobotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief Networks
 
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear...Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear...
 
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
 
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
 
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
 
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
 
Machine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for GlaucomaMachine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for Glaucoma
 
Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...
 
New Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking ApplicationsNew Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking Applications
 

two algebraic instants [1]. It achieves 2 bpp (bits per pixel) with low computational overhead. The vast quantity of digital information on the web has brought a number of challenges, such as prohibitively high costs in terms of storing and transmitting the information. Image compression therefore involves reducing the amount of memory needed to store an image so as to cut these costs. Image processing is normally a highly computation-intensive task; taking the image representation and quality into consideration, image processing schemes must have sufficient capability to deliver acceptable results. The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, computer animation and vision, has increased the need for effective image compression techniques [2].

2. Block Truncation Coding

Block Truncation Coding is a lossy image compression method for grayscale images. It divides the original image into blocks and then applies a quantizer to reduce the number of grey levels in each block while preserving the block mean and standard deviation. It is an early precursor of modern hardware techniques; BTC was later adapted to color images by an analogous approach called Color Cell Compression, and it has also been adapted to video compression. BTC was originally proposed by O.R. Mitchell at Purdue University. A variant of BTC is Absolute Moment Block Truncation Coding (AMBTC), in which the first absolute moment is preserved along with the mean instead of the standard deviation. AMBTC is computationally simpler than BTC and also typically yields a lower mean squared error. Using sub-blocks of 4x4 pixels gives a compression ratio of 4:1 (the 2 bpp noted above), assuming 8-bit integer values are used during transmission or storage. Larger blocks allow greater compression, but quality also decreases with increasing block size owing to the nature of the algorithm. The image is segmented into blocks of typically 4x4 pixels [3]. For each block the mean and standard deviation of the pixel values are computed; these statistics generally change from block to block.

The pixel values chosen for each reconstructed block are selected so that each block of the compressed image has the same mean and standard deviation as the corresponding block of the original image. The quantization that yields the compression proceeds as follows:

y(i,j) = \begin{cases} 1, & x(i,j) > \bar{x} \\ 0, & x(i,j) \le \bar{x} \end{cases}

where x(i,j) are the pixel elements of the original block and y(i,j) are the pixel elements of the compressed block. In words: if a pixel value is greater than the mean it is assigned the value "1", otherwise "0"; values equal to the mean may take either "1" or "0" depending on the preference of the programmer implementing the algorithm. This 16-bit bit plane is transmitted along with the values of the mean and standard deviation. Reconstruction uses two values, "a" and "b", which preserve the mean and the standard deviation:

a = \bar{x} - \sigma \sqrt{\frac{q}{m-q}}

b = \bar{x} + \sigma \sqrt{\frac{m-q}{q}}

where \sigma is the standard deviation, m is the total number of pixels in the block and q is the number of pixels greater than the mean \bar{x}. To rebuild the image, or rather its approximation, elements assigned a "0" are replaced with the value a and elements assigned a "1" are replaced with the value b:

\hat{x}(i,j) = \begin{cases} a, & y(i,j) = 0 \\ b, & y(i,j) = 1 \end{cases}

This shows that the method is asymmetric: the encoder has much more to do than the decoder, since the decoder merely replaces the 1's and 0's with the estimated values, while the encoder must also compute the mean, the standard deviation and the values a and b.
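As a minimal illustration of the scheme above, the following Python sketch (assuming NumPy is available; the function names are ours, not the paper's) encodes and decodes a single 4x4 block:

    import numpy as np

    def btc_encode(block):
        # Keep the block mean, standard deviation and a one-bit plane.
        mean, std = block.mean(), block.std()
        bitplane = block > mean          # "1" where the pixel exceeds the mean
        return mean, std, bitplane

    def btc_decode(mean, std, bitplane):
        # Rebuild the block from (mean, std, bit plane), preserving both moments.
        m = bitplane.size                # total pixels in the block
        q = int(bitplane.sum())          # pixels above the mean
        if q in (0, m):                  # flat block: nothing to spread
            return np.full(bitplane.shape, mean)
        a = mean - std * np.sqrt(q / (m - q))
        b = mean + std * np.sqrt((m - q) / q)
        return np.where(bitplane, b, a)  # "1" -> b, "0" -> a

    block = np.array([[12, 200, 13, 14],
                      [11, 210, 15, 205],
                      [12, 13, 14, 215],
                      [220, 13, 12, 11]], dtype=float)
    print(btc_decode(*btc_encode(block)).round(1))

The decoded block matches the original in mean and standard deviation, though not pixel for pixel, which is the sense in which BTC is lossy.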
3. Discrete Cosine Transform

The Discrete Cosine Transform (DCT) expresses a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications, from lossy compression of multimedia files to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is decisive for compression, because it turns out that fewer cosine functions are needed to approximate a typical signal, while for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform analogous to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry, in which some variants shift the input and/or output data by half a sample. There are eight standard DCT variants, of which four are common. The most common discrete cosine transform is the type-II DCT, which is often called simply "the DCT"; conversely, the type-III DCT is often called "the inverse DCT". Two related transforms are the Discrete Sine Transform (DST), which is equivalent to a DFT of real and odd functions, and the Modified Discrete Cosine Transform (MDCT), which is based on a DCT of overlapping data. Formally, the discrete cosine transform is a linear, invertible function, or equivalently an invertible N × N square matrix. The DCT-I is

X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\left[\frac{\pi}{N-1}\, n k\right], \quad k = 0, \dots, N-1.

Further multiplying the x_0 and x_{N-1} terms by \sqrt{2}, and correspondingly multiplying the X_0 and X_{N-1} terms by 1/\sqrt{2}, makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N-2 real numbers with even symmetry. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even about n=0 and even about n=N-1; likewise for X_k. The DCT-II is probably the most commonly used form:

X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right) k\right], \quad k = 0, \dots, N-1.

This transform is exactly equivalent, up to an overall scale factor of 2, to a DFT of 4N real inputs of even symmetry whose even-indexed elements are zero. Specifically, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 <= n < N, y_{2N} = 0, and y_{4N-n} = y_n for 0 < n < 2N. Multiplying the X_0 term by 1/\sqrt{2} and multiplying the resulting matrix by an overall scale factor of \sqrt{2/N} makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. In many applications, for instance JPEG, the scaling is arbitrary, because scale factors can be combined with a subsequent computational step, and a scaling can be chosen that allows the DCT to be computed with fewer multiplications. The DCT-II implies the boundary conditions: x_n is even about n=-1/2 and even about n=N-1/2; X_k is even about k=0 and odd about k=N. Since it is the inverse of the DCT-II up to a scale factor, the DCT-III is sometimes called "the inverse DCT":

X_k = \frac{1}{2} x_0 + \sum_{n=1}^{N-1} x_n \cos\left[\frac{\pi}{N}\, n \left(k + \frac{1}{2}\right)\right], \quad k = 0, \dots, N-1.

Dividing the x_0 term by \sqrt{2} instead of 2 and multiplying the resulting matrix by an overall scale factor of \sqrt{2/N} makes the DCT-II and DCT-III transposes of one another [4]. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even about n=0 and odd about n=N; X_k is even about k=-1/2 and odd about k=N-1/2. The DCT-IV is

X_k = \sum_{n=0}^{N-1} x_n \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)\left(k + \frac{1}{2}\right)\right], \quad k = 0, \dots, N-1.

The DCT-IV matrix becomes orthogonal if it is further multiplied by an overall scale factor of \sqrt{2/N}. A variant of the DCT-IV, in which data from different transforms are overlapped, is called the MDCT. The DCT-IV implies the boundary conditions: x_n is even about n=-1/2 and odd about n=N-1/2; likewise for X_k. DCTs of types I-IV treat both boundaries consistently with respect to the point of symmetry: they are even/odd about either a data point for both boundaries or the point halfway between two data points for both boundaries. By contrast, DCTs of types V-VIII imply boundaries that are even/odd about a data point for one boundary and halfway between two data points for the other boundary. In other words, DCT types I-IV correspond to real-even DFTs of even order, since the equivalent DFT has length 2(N-1) (for DCT-I), 4N (for DCT-II/III) or 8N (for DCT-IV). The four additional types of discrete cosine transform correspond to real-even DFTs of logically odd order, which have factors of N ± 1/2 in the denominators of the cosine arguments. These variants, however, seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs, and this added complexity carries over to the corresponding DCTs.
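The DCT-II above can be evaluated literally in O(N^2) operations. The following Python sketch (assuming NumPy and SciPy are available) implements the summation directly and checks it against SciPy's dct, whose unnormalized type-2 convention carries an extra factor of 2:

    import numpy as np
    from scipy.fft import dct   # reference implementation for comparison

    def dct2_naive(x):
        # Direct O(N^2) DCT-II: X_k = sum_n x_n cos(pi/N * (n + 1/2) * k)
        N = len(x)
        n = np.arange(N)
        return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                         for k in range(N)])

    x = np.random.default_rng(0).random(8)
    assert np.allclose(dct2_naive(x), dct(x, type=2) / 2)
    print(dct2_naive(x).round(4))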
4. Discrete Wavelet Transform

In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, its chief advantage over Fourier transforms is temporal resolution: it captures both frequency and location information. The discrete wavelet transform has wide application in science and engineering. Notably, it is used for signal coding, to represent a discrete signal in a more redundant form, often as a prerequisite for data compression. Practical applications are also found in the signal processing of accelerations for gait analysis, in digital communications and in much else. The DWT has been successfully realized as an analog filter bank in biomedical processing for the design of pacemakers, and also in ultra-wideband (UWB) wireless communications. The DWT of a signal x is computed by passing it through a sequence of filters. First the samples are passed through a low-pass filter with impulse response g, resulting in a convolution:

y[n] = (x * g)[n] = \sum_{k=-\infty}^{\infty} x[k]\, g[n-k]

The signal is also decomposed simultaneously using a high-pass filter h. The outputs give the detail coefficients, from the high-pass filter, and the approximation coefficients, from the low-pass filter. It is important that the two filters are related to each other; such a pair is known as a quadrature mirror filter [5].
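A minimal sketch of one analysis level in Python, assuming NumPy and using the Haar filter pair for g and h (any quadrature mirror pair would serve), with the customary downsampling by two after each convolution:

    import numpy as np

    def dwt_haar_level(x):
        g = np.array([1, 1]) / np.sqrt(2)    # Haar low-pass (approximation)
        h = np.array([1, -1]) / np.sqrt(2)   # Haar high-pass (detail)
        approx = np.convolve(x, g)[1::2]     # y[n] = sum_k x[k] g[n-k], downsampled
        detail = np.convolve(x, h)[1::2]
        return approx, detail

    x = np.array([4., 6., 10., 12., 8., 6., 5., 5.])
    a, d = dwt_haar_level(x)
    print(a)   # scaled pairwise sums (x0+x1)/sqrt(2), ...
    print(d)   # scaled pairwise differences (x1-x0)/sqrt(2), ...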
5. Dual Tree Complex Wavelet Transform

The Complex Wavelet Transform (CWT) is a relatively recent enhancement of the DWT with important additional properties: it is nearly shift invariant and directionally selective in higher dimensions. It achieves this with a redundancy factor of only 2^d for d-dimensional signals, substantially lower than that of the undecimated DWT. The multidimensional dual-tree CWT is non-separable but is based on a computationally efficient, separable filter bank. The CWT is a complex-valued extension of the standard DWT. It is a two-dimensional wavelet transform which provides multi-resolution, sparse representation and useful characterization of the structure of an image. Further, it provides a high degree of shift-invariance in its magnitude. A downside of this transform, however, is its 2^d redundancy compared to a separable DWT, where d is the dimension of the signal transformed. The use of complex wavelets was first introduced by J.M. Lina and L. Gagnon in the framework of the Daubechies orthogonal filter banks in 1995. It was then generalized by Prof. Nick Kingsbury of Cambridge University in 1997. In computer vision, by exploiting the concept of visual context, one can quickly focus on candidate regions where objects of interest may be found, and then compute additional features through the CWT for those regions only. These additional features, while not necessary for whole regions, are useful in the accurate detection and recognition of smaller objects. Likewise, the CWT may be applied to detect the activated voxels of the cortex; in addition, temporal independent component analysis may be employed to extract the underlying independent sources, whose number is determined by the Bayesian information criterion.

The dual-tree complex wavelet transform (DTCWT) computes the complex transform of a signal using two separate DWT decompositions, tree "a" and tree "b", as shown in Fig. 1. If the filters used in the one tree are specifically designed differently from those in the other, it is possible for one DWT to produce the real coefficients and the other the imaginary ones. This redundancy of two provides extra information for analysis, but at the expense of extra computational power [6]. It also affords approximate shift-invariance yet still permits perfect reconstruction of the signal. The design of the filters is particularly important for the transform to behave correctly, and the necessary characteristics are:

(1) The low-pass filters in the two trees must differ by half a sample period
(2) The reconstruction filters are the reverse of the analysis filters
(3) All filters come from the same orthonormal set
(4) Tree "a" filters are the reverse of tree "b" filters
(5) Both trees have the same frequency response

Fig. 1. Block diagram for a 3-level DTCWT.
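The dual-tree idea can be sketched as two DWTs run in parallel, one tree supplying the real part and the other the imaginary part of each complex coefficient. The Python sketch below (assuming NumPy) uses ordinary short filters chosen purely for illustration, not Kingsbury's properly half-sample-delayed pairs, so it demonstrates the structure only, not the true shift-invariance:

    import numpy as np

    def dwt_level(x, g, h):
        # One analysis level with the given low/high-pass pair, downsampled by 2.
        return np.convolve(x, g)[1::2], np.convolve(x, h)[1::2]

    # Tree "a" and tree "b" use different filter pairs; in a real DTCWT the
    # pairs are designed so tree "b" is delayed half a sample relative to "a".
    ga, ha = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
    gb = np.array([1, 2, 1]) / (2 * np.sqrt(2))
    hb = np.array([-1, 0, 1]) / (2 * np.sqrt(2))

    x = np.random.default_rng(1).random(16)
    _, da = dwt_level(x, ga, ha)             # tree "a": real part
    _, db = dwt_level(x, gb, hb)             # tree "b": imaginary part
    detail = da + 1j * db[:len(da)]
    print(np.abs(detail))   # magnitudes; nearly shift-invariant with proper filters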
6. Embedded Zero-tree Wavelet Transforms

Embedded Zero-tree Wavelet (EZW) coding is a lossy image compression technique. At high compression ratios, the majority of the coefficients produced by a sub-band transform, such as the wavelet transform, will be zero or extremely close to zero. This happens because "real world" images tend to contain mostly low-frequency information that is highly correlated. Nevertheless, high-frequency information such as edges does occur in the image; this is particularly significant in terms of human perception of image quality, and must therefore be represented accurately in any high-quality coding system. By considering the transformed coefficients as a tree, with the lowest-frequency coefficients at the root node and with the children of each tree node being the spatially related coefficients in the next higher-frequency sub-band, there is a high probability that one or more sub-trees will consist entirely of coefficients which are zero or nearly zero; such sub-trees are called zero-trees. Owing to this, the terms node and coefficient are used interchangeably: the children of a coefficient means the child coefficients of the node in the tree at which that coefficient is located. Children refers to directly connected nodes lower in the tree, and descendants refers to all nodes anywhere below a particular node in the tree, even if not directly connected.

In zero-tree based image compression schemes such as EZW and SPIHT, the intention is to exploit the statistical properties of the trees in order to code the positions of the significant coefficients efficiently. Because the majority of coefficients will be zero or nearly zero, the spatial positions of the significant coefficients make up a large share of the total size of a typical compressed image. A coefficient (or a set: a node and all its descendants, in the case of a tree) is considered significant if its magnitude is above a particular threshold. By starting with a threshold close to the maximum coefficient magnitude and iteratively halving the threshold, it is possible to create a compressed representation of the image which progressively adds finer detail. Owing to the structure of the trees, it is likely that if a coefficient in a particular frequency band is insignificant, then all its descendants, including the spatially related higher-frequency band coefficients, will also be insignificant.

EZW uses four symbols to represent (a) a zero-tree root, (b) an isolated zero (a coefficient which is insignificant but which has significant descendants), (c) a significant positive coefficient and (d) a significant negative coefficient. The symbols can thus be represented by two binary bits. The compression algorithm consists of a number of iterations through a dominant pass and a subordinate pass; after each iteration the threshold is halved. The dominant pass encodes the significance of the coefficients which have not yet been found significant in earlier iterations, by scanning the trees and emitting one of the four symbols (a simplified sketch is given at the end of this section). The children of a coefficient are only scanned if the coefficient was found to be significant, or if the coefficient was an isolated zero. The subordinate pass emits one bit, the most significant bit not yet emitted, for every coefficient which has been found significant in a previous dominant pass. The subordinate pass is therefore analogous to bit-plane coding.

There are several important features to note. First, it is possible to stop the compression process at any point and obtain an approximation of the original image: the greater the number of bits received, the better the image. Second, because the compression process is organized as a sequence of decisions, the same algorithm can be run at the decoder to reconstruct the coefficients, with the decisions made according to the incoming bit stream. In practical implementations, it is customary to use an entropy code such as arithmetic coding to further improve the performance of the dominant pass. Bits from the subordinate pass are usually sufficiently random that entropy coding provides no further coding gain [7]. The coding performance of EZW has since been exceeded by SPIHT and its many derivatives. The EZW algorithm also exhibits the following features:

(1) A discrete wavelet transform which can exploit a compact multi-resolution representation of the image.
(2) Zero-tree coding which provides a compact multi-resolution representation of the significance map.
(3) Successive approximation for a compact multi-precision representation of the significant coefficients.
(4) A prioritization protocol determined by the precision, magnitude, scale and spatial location of the wavelet coefficients.
(5) Adaptive multilevel arithmetic coding for entropy coding of the symbol sequence.
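A simplified Python sketch of the dominant pass (assuming NumPy, and using a plain quadtree parent-child map; in a real wavelet pyramid the map runs across sub-bands, descendants of zero-tree roots are skipped during the scan, and the subordinate pass follows):

    import numpy as np

    def children(i, j, rows, cols):
        # Children of (i, j) one level finer; leaves have none.
        if 2 * i + 1 >= rows or 2 * j + 1 >= cols:
            return []
        return [(2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)]

    def is_zerotree(c, i, j, T):
        # True if (i, j) and all of its descendants are insignificant at T.
        if abs(c[i, j]) >= T:
            return False
        return all(is_zerotree(c, ci, cj, T)
                   for ci, cj in children(i, j, *c.shape))

    def dominant_pass(c, T):
        # Emit one of the four EZW symbols for every position.
        out = []
        for i in range(c.shape[0]):
            for j in range(c.shape[1]):
                if abs(c[i, j]) >= T:
                    out.append('P' if c[i, j] > 0 else 'N')  # significant +/-
                elif is_zerotree(c, i, j, T):
                    out.append('T')                          # zero-tree root
                else:
                    out.append('Z')                          # isolated zero
        return out

    coeffs = np.array([[34., -10., 3., 2.],
                       [12., 6., -1., 20.],
                       [2., 1., 0., 1.],
                       [-17., 0., 1., 0.]])
    print(dominant_pass(coeffs, T=16))   # initial T is about half the max magnitude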
7. Set Partitioning in Hierarchical Trees

Set partitioning in hierarchical trees (SPIHT) is an image compression technique that exploits the inherent similarity across the sub-bands in a wavelet decomposition of an image. The method codes the most important wavelet transform coefficients first, and transmits the bits so that a progressively more refined copy of the original image can be obtained. The SPIHT algorithm builds on the embedded zero-tree wavelet (EZW) coding technique; it uses spatial orientation trees and a set-partitioning sorting procedure. Coefficients corresponding to the same spatial location in different sub-bands of the pyramid structure exhibit self-similar characteristics. SPIHT defines parent-children relationships among these self-similar sub-bands to establish the spatial orientation trees [8]. Each round proceeds in three steps (a sketch of the underlying set-significance test follows the steps):

Step 1: In the sorting pass, the List of Insignificant Pixels (LIP) is scanned to decide whether each entry is significant at the current threshold. If an entry is found to be significant, a bit "1" is output, followed by a further bit for the sign of the coefficient, "1" for positive or "0" for negative. The significant entry is then moved to the List of Significant Pixels (LSP).

Step 2: Entries in the List of Insignificant Sets (LIS) are processed. When an entry is the set of all descendants of a coefficient, called "type 1", magnitude tests over all descendants of the current entry are carried out to decide whether the set is significant. If the set is significant, each direct offspring of the entry undergoes its own magnitude test: a significant offspring is moved into the LSP, an insignificant one into the LIP; the entry itself is then moved to the end of the LIS as "type 2", the set of all descendants excluding the direct offspring of the coefficient. If instead the set is insignificant, the spatial orientation tree rooted at the current entry is a zero-tree, so a bit "0" is output and no further processing is needed. If the entry in the LIS is of type 2, the significance test is carried out on the descendants of its direct offspring. If it passes, the spatial orientation tree rooted at the type-2 entry is split into four sub-trees rooted at the direct offspring, and these are appended to the end of the LIS as type-1 entries. The important point in the LIS sorting is that complete sets of insignificant coefficients, the zero-trees, are represented by a single zero. The reason for defining the spatial parent-children relationships is to raise the likelihood of finding these zero-trees.

Step 3: Finally, a refinement pass outputs the refinement bit (the nth bit) of each coefficient in the LSP at the current threshold. Before the algorithm proceeds to the next round, the current threshold is halved.
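The set-significance test that drives the LIS processing can be sketched as follows (assuming NumPy; the helper names are ours, and the full LIP/LIS/LSP bookkeeping and refinement pass are omitted):

    import numpy as np

    def offspring(i, j, rows, cols):
        # Direct offspring O(i, j) in the spatial orientation tree.
        if 2 * i + 1 >= rows or 2 * j + 1 >= cols:
            return []
        return [(2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)]

    def descendants(i, j, rows, cols):
        # All descendants D(i, j): offspring plus their descendants.
        out = []
        for ci, cj in offspring(i, j, rows, cols):
            out.append((ci, cj))
            out.extend(descendants(ci, cj, rows, cols))
        return out

    def set_significant(c, entries, T):
        # A set is significant if any coefficient in it reaches the threshold.
        return any(abs(c[i, j]) >= T for i, j in entries)

    c = np.random.default_rng(2).normal(scale=8, size=(8, 8))
    D = descendants(1, 1, *c.shape)                            # a "type 1" set
    L = [p for p in D if p not in offspring(1, 1, *c.shape)]   # a "type 2" set
    print(set_significant(c, D, 16), set_significant(c, L, 16))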
8. Computational Paradigm and Quantitative Optimization

Although a direct application of the DCT definition would require O(N^2) operations, it is possible to compute the same thing with only O(N log N) complexity by factorizing the computation in a manner similar to the Fast Fourier Transform (FFT). DCTs can also be computed via FFTs combined with O(N) pre-processing and post-processing steps. In general, O(N log N) methods for computing DCTs are known as Fast Cosine Transform (FCT) algorithms. The most efficient algorithms, in principle, are usually those specialized directly for the DCT, as opposed to using an ordinary FFT plus O(N) extra operations. Conversely, even "specialized" DCT algorithms, including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes, are typically closely related to FFT algorithms; since DCTs are essentially DFTs of real-even data, one can devise a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. Algorithms based on the Cooley-Tukey FFT algorithm are the most familiar, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and an analogous algorithm was proposed by Feig and Winograd for the DCT. Since the algorithms for DFTs, DCTs and related transforms are all so closely linked, any improvement in algorithms for one transform will in principle lead to immediate gains for the other transforms as well. While DCT algorithms that employ an unmodified FFT often carry some theoretical overhead compared with the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT routines are widely available.
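The equivalence noted earlier between the DCT-II and a DFT of 4N real inputs can be checked numerically. The following Python sketch (assuming NumPy and SciPy) embeds a length-N signal in a length-4N sequence with zero even-indexed entries and even symmetry, and recovers the unnormalized DCT-II directly from the FFT:

    import numpy as np
    from scipy.fft import dct

    N = 8
    x = np.random.default_rng(3).random(N)

    # y_{2n+1} = x_n for 0 <= n < N, mirrored so that y_{4N-m} = y_m;
    # all even-indexed entries stay zero.
    y = np.zeros(4 * N)
    y[1:2*N:2] = x
    y[2*N+1::2] = x[::-1]

    X_fft = np.fft.fft(y).real[:N]     # DFT of the symmetric extension
    X_dct = dct(x, type=2)             # SciPy's unnormalized DCT-II
    print(np.allclose(X_fft, X_dct))   # True: the two agree exactly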
Therefore, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms. Performance on modern hardware is typically not dominated by arithmetic counts alone, and optimization requires considerable engineering effort. Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes, such as the 8x8 DCT-II used in JPEG compression or the small DCTs and MDCTs commonly used in audio compression. Reduced code size may also be a reason to use a specialized DCT for embedded applications. In fact, the DCT algorithms that employ an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size 4N with real-even symmetry whose even-indexed elements are zero. A common technique for computing it is via an FFT, and this technique in retrospect can be seen as one step of a radix-4 Cooley-Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II. The radix-4 step reduces the size-4N DFT to four size-N DFTs of real data, two of which are zero and two of which are equal to one another by the even symmetry, leaving a single size-N FFT of real data plus O(N) operations. Because the even-indexed elements are zero, this radix-4 step is exactly equivalent to a split-radix step; if the subsequent size-N real-data FFT is also performed by a real-data split-radix algorithm, the resulting algorithm actually matches the lowest published arithmetic count for the DCT-II (2N log2 N - N + 2). Consequently, there is nothing inherently bad about computing the DCT by means of an FFT from an arithmetic point of view; it is sometimes merely a question of whether the corresponding FFT algorithm is optimal.

The filter bank realization of the Discrete Wavelet Transform takes only O(N) time in certain cases, as compared with O(N log N) for the fast Fourier transform. At each level the wavelet filter bank performs two O(N) convolutions, then splits the signal into two branches of size N/2; but, in contrast with the FFT, which recursively splits both the upper branch and the lower branch, it recursively splits only the branch convolved with the low-pass filter g[n]. This leads to the recurrence

T(N) = 2N + T(N/2)

which yields an O(N) time for the complete operation, as can be shown by a geometric series expansion of the relation.
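The geometric series can be checked directly; a small Python sketch (the constant 2 stands for the two convolutions per level, assuming filters of constant length):

    def dwt_cost(N):
        # T(N) = 2N + T(N/2), with T(1) = 0: only the low-pass branch recurses.
        return 0 if N < 2 else 2 * N + dwt_cost(N // 2)

    for N in (8, 64, 1024, 2**20):
        print(N, dwt_cost(N), dwt_cost(N) / N)   # the ratio approaches 4, so O(N)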
9. Parallel Processing Performance

With the advance of computer technology, multiple cores are nowadays available in personal computers, and the Graphics Processing Unit (GPU) can likewise be used for parallel computing. A multi-core processing unit is an architecture in which multiple processors are packed into a single integrated circuit, with core logic comprising a number of high-performance cores [9]. A GPU is an electronic circuit comprising hundreds of moderately powerful cores, and can be inserted into a PCI slot. High Performance Computing (HPC) is aggregated computational power that delivers much higher performance than one could obtain from a typical desktop computer or workstation, in order to solve larger problems in engineering and commerce. High performance computers are a group of computers; each computer in the group is considered a node, which can be a distinct multi-core machine. Parallel processing architectures can be categorized into two kinds:

1. Distributed memory designs
2. Shared memory designs

9.1. Parallel Processing of Compound Image Compression Algorithm Using Static Partitioning

In static partitioning, the compound image is divided into several regions according to the available number of processors; the steps are as follows (a sketch follows the steps):

1. Divide the given image into as many arrays as there are available processors.
2. Each processor applies the sequential compression algorithm to its own sub-image, with minimal communication among the processors.
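A minimal sketch of these two steps in Python, assuming NumPy and the standard multiprocessing module, with a BTC-style stand-in for whatever sequential compressor is used:

    import numpy as np
    from multiprocessing import Pool

    def compress_tile(tile):
        # Stand-in for any sequential compression routine (BTC, DCT, DWT, ...).
        return tile.mean(), tile.std(), tile > tile.mean()

    def compress_static(image, n_workers=4):
        # Static partitioning: one strip per processor, no sharing afterwards.
        strips = np.array_split(image, n_workers, axis=0)
        with Pool(n_workers) as pool:
            return pool.map(compress_tile, strips)

    if __name__ == '__main__':
        rng = np.random.default_rng(4)
        image = rng.integers(0, 256, size=(512, 512)).astype(float)
        payloads = compress_static(image)
        print(len(payloads), 'strips compressed independently')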
The quality of the compressed image will be poorer than that obtained by sequential compression, since each processor works with only a partition of the whole domain and thus may not always find as good a match within its own blocks as would be obtained using the sequential algorithm. This shortcoming can be overcome by employing processors with enough memory to hold the entire image as the search domain.

9.2. Parallel Processing of Compound Image Compression Algorithm Using Dynamic Partitioning

In dynamic partitioning, the compound image compression is parallelized as follows (a sketch follows the steps):

1. The original image is divided into small non-overlapping array blocks of identical size, for example 4x4, 8x8 or 16x16.
2. The array blocks are grouped into compound task blocks.
3. The designer enumerates the required computing nodes and then assigns the appropriate algorithm to each as needed.
4. The designer creates a thread for each computing node that has been assigned an algorithm. This thread is responsible for transmitting compound task blocks to the corresponding computing node and for receiving the computational results.
5. All computational results are ordered according to the block identity of the compound task blocks and then written together into the compressed file.
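A corresponding Python sketch of dynamic partitioning, again assuming NumPy and multiprocessing: blocks are handed to whichever worker is free, and the results are re-ordered by block identity before being written out:

    import numpy as np
    from multiprocessing import Pool

    def blocks(image, size=8):
        # Step 1: small non-overlapping blocks of equal size, tagged with an id.
        rows, cols = image.shape
        for i in range(0, rows, size):
            for j in range(0, cols, size):
                yield (i, j), image[i:i+size, j:j+size]

    def compress_block(task):
        # Stand-in compressor; returns the block id with its compressed payload.
        (i, j), block = task
        return (i, j), (block.mean(), block.std(), block > block.mean())

    if __name__ == '__main__':
        rng = np.random.default_rng(5)
        image = rng.integers(0, 256, size=(64, 64)).astype(float)
        with Pool(4) as pool:
            # imap_unordered gives each block to whichever worker is free
            # (dynamic load balancing); results are then sorted by block id.
            results = sorted(pool.imap_unordered(compress_block, blocks(image)),
                             key=lambda r: r[0])
        print(len(results), 'task blocks compressed and ordered by block id')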
Conclusion

Although the speed of each individual processor continues to climb at a steady rate, the computational requirements driven by applications have always outpaced the available technology, pressing designers to seek faster and more cost-effective processors. Parallel and distributed computing go one step further, offering computing power beyond the technological limits of a single-processor system. Both hardware and software advances continue to bring improvements, each with its own suitability to a variety of applications. Cost-effective hardware schemes are now widely available, and software-based compression, previously impractical, is now expanding from research into the commercial market. Single-chip solutions are widely obtainable, but efficient designs using parallel processing of multiple chips and pipelining techniques continue to yield fast encoder circuits for larger applications. Multimedia processors are becoming increasingly popular, as they can be flexibly embedded in complex applications. While many compound image compression tasks can now be handled by a single processor, the drive toward higher picture quality and lower bit rates continues to push compression research toward new, more complex procedures that can benefit from the availability of parallel processing. Parallel processing can therefore be very effective for high-quality compound image compression, where tuning of parameters and multiple coding passes are needed to produce the best possible compressed compound image.

References

1. Ritu Chourasiya, Ajit Shrivastava. A Study of Image Compression Based Transmission Algorithm Using SPIHT for Low Bit Rate Application. Advanced Computing: An International Journal, Vol. 3, No. 6, November 2012, pp. 47-54.
2. K.S. Thyagarajan. Still Image and Video Compression with MATLAB. John Wiley and Sons, Inc., 2011.
3. K. Somasundaram, I. Kaspar Raj. Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding. World Academy of Science, Engineering and Technology, 19, 2008, pp. 974-979.
4. Kamrul Hasan Talukder, Koichi Harada. Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image. IAENG International Journal of Applied Mathematics, 36:1, 2007.
5. Ritu Chourasiya, Ajit Shrivastava. A Study of Image Compression Based Transmission Algorithm Using SPIHT for Low Bit Rate Application. Advanced Computing: An International Journal, Vol. 3, No. 6, November 2012, pp. 47-54.
6. K.S. Thyagarajan. Still Image and Video Compression with MATLAB. John Wiley and Sons, Inc., 2011.
7. K. Somasundaram, I. Kaspar Raj. Low Computational Image Compression Scheme based on Absolute Moment Block Truncation Coding. World Academy of Science, Engineering and Technology, 19, 2008, pp. 974-979.
8. Kamrul Hasan Talukder, Koichi Harada. Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image. IAENG International Journal of Applied Mathematics, 36:1, 2007.
9. S.T. Klein, Y. Wiseman. Parallel Huffman Decoding with Applications to JPEG Files. The Computer Journal, Vol. 46, No. 5, 2003, pp. 487-497.
10. Mohammad Mofarreh-Bonab et al. A New Technique for Image Compression Using PCA. International Journal of Computer Science and Communication Networks, 2(1), 2010, pp. 111-116.
11. Aysha V., Kannan Balakrishnan. Parallel Genetic Algorithm for Document Image Compression Optimization. International Conference on Electronics and Information Engineering, IEEE, 2010.