Adaptive Neuro-Fuzzy Inference System based Fractal Image Compression

Abstract—This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for fractal image compression. Fractal Image Compression (FIC) is a spatial-domain image compression technique, but its main drawback is that the traditional exhaustive search requires a large computational time because the search is global. In order to improve the computational time and compression ratio, an artificial intelligence technique, ANFIS, has been used. Feature extraction reduces the dimensionality of the problem and enables the ANFIS network to be trained on an image separate from the test image, thus reducing the computational time: lowering the dimensionality of the problem reduces the computations required during the search. The main advantage of the ANFIS network is that it can adapt itself from the training data and produce a fuzzy inference system, adjusting to the distribution of the feature space observed during training. Computer simulations reveal that the network has been properly trained and that the evolved fuzzy system classifies the domains correctly with minimum deviation, which helps in encoding the image using FIC.


ACEEE International Journal on Signal and Image Processing, Vol. 1, No. 2, July 2010
© 2010 ACEEE. DOI: 01.ijsip.01.02.04

Y. Chakrapani¹ and K. Soundararajan²
¹ ECE Department, G. Pulla Reddy Engineering College, Kurnool, India. Email: yerramsettychakri@gmail.com
² ECE Department, J.N.T University, Ananthapur, India

Index Terms—ANFIS, FIC, standard deviation, skew, neural network

I. INTRODUCTION

Fractal image compression exploits the similarities between different parts of an image. An image is represented by fractals rather than pixels, where each fractal is defined by a unique Iterated Function System (IFS) consisting of a group of affine transformations. FIC employs a special type of IFS called a Partitioned Iterated Function System (PIFS). The Collage Theorem, applied to PIFS, plays the same role for gray-scale images that the IFS theory plays for binary images, and it allows gray-scale images to be encoded effectively. The key step in fractal coding is to extract the fractals that are suitable for approximating the original image; these fractals are represented as a set of affine transformations [1-3]. Fractal image coding, introduced by Barnsley and Jacquin, grew out of the study of iterated function systems developed over the preceding decade. Because of its high compression ratio and simple decompression method, it has attracted a great deal of research, but the main drawback of that work is the large computational time required for compression.

In order to reduce the number of computations required, many sub-optimal techniques have been proposed. It has been shown that the computational time can be improved by performing the search over a small subset of the domain pool rather than over the whole space. The purpose of clustering is to divide a given group of objects into a number of groups such that the objects in a particular cluster are similar to one another and dissimilar to the objects in the other clusters [4]. In this method, N objects are divided into M clusters, where M < N, by minimizing some criterion. The problem is to classify a group of samples; these samples form clusters of points in an n-dimensional space, and each cluster is a group of similar samples. Data clustering algorithms can be hierarchical: hierarchical algorithms find successive clusters using previously established clusters, and they can be agglomerative ("bottom-up") or divisive ("top-down"). Agglomerative algorithms begin with each element as a separate cluster and merge them into successively larger clusters. Divisive algorithms begin with the whole set and proceed to divide it into successively smaller clusters.

A fuzzy system has the capability to estimate the output pattern from the input patterns using the membership functions created for it. It works by applying rules written to capture the behaviour of the system under consideration. The main disadvantage of fuzzy systems is the large computational time required to tune the rules. An artificial neural network (ANN), often just called a "neural network" (NN), is a mathematical or computational model inspired by biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation [5].

The main objective of this paper is to develop a technique that combines fuzzy systems and neural networks, called the Adaptive Neuro-Fuzzy Inference System. This technique uses the NN to create the rules and produces a fuzzy model based on the Takagi-Sugeno approach. The resulting fuzzy inference system can be employed to classify the domain pool blocks of a gray-level image, thus reducing the encoding time.
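The agglomerative ("bottom-up") strategy described above can be sketched in a few lines. This is only an illustrative single-linkage sketch, not the classification scheme actually used in this paper; the function name and the brute-force pairwise search are assumptions made for clarity.

```python
# Illustrative agglomerative ("bottom-up") clustering: every sample starts as
# its own cluster, and the two closest clusters are merged repeatedly until
# only m clusters remain. Single-linkage distance is assumed here.

def agglomerative(points, m):
    """Merge N points (tuples of coordinates) into m clusters."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single linkage: distance between the closest pair of points
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > m:
        # find the closest pair of clusters and merge them
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters
```

For example, four points forming two well-separated pairs collapse into those two pairs when m = 2.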
In view of this, Section II deals with the concept of FIC, covering the self-affine transformations and the Collage Theorem. Section III deals with the concept of ANFIS. Results and discussion are presented in Section IV, and conclusions are drawn in Section V.

II. FRACTAL IMAGE COMPRESSION

The fractal image compression algorithm is based on the fractal theory of self-similar and self-affine transformations.

A. Self-Affine and Self-Similar Transformations

In this section we present the basic theory involved in fractal image compression. A self-affine transformation W : R^n → R^n is a transformation of the form W(x) = T(x) + b, where T is a linear transformation on R^n and b ∈ R^n is a vector.

A mapping W : D → D, D ⊆ R^n, is called a contraction on D if there is a real number c, 0 < c < 1, such that d(W(x), W(y)) ≤ c d(x, y) for all x, y ∈ D, for a metric d on D. The real number c is called the contractivity of W. If d(W(x), W(y)) = c d(x, y), then W is called a similarity.

A family {w_1, ..., w_m} of contractions is known as a Local Iterated Function Scheme (LIFS). If there is a subset F ⊆ D such that, for a LIFS {w_1, ..., w_m},

    F = \bigcup_{i=1}^{m} w_i(F)                                        (1)

then F is said to be invariant for that LIFS. If F is invariant under a collection of similarities, F is known as a self-similar set. Let S denote the class of all non-empty compact subsets of D. The δ-parallel body of A ∈ S is the set of points within distance δ of A, i.e.

    A_\delta = \{ x \in D : |x - a| \le \delta \text{ for some } a \in A \}      (2)

Let us define the distance d(A, B) between two sets A, B to be

    d(A, B) = \inf \{ \delta : A \subset B_\delta \wedge B \subset A_\delta \}

This distance function is known as the Hausdorff metric on S; other distance measures can also be used.

Given a LIFS {w_1, ..., w_m}, there exists a unique compact invariant set F satisfying (1), known as the attractor of the system. Let E be a non-empty compact set such that w_i(E) ⊂ E, and define

    W(E) = \bigcup_{i=1}^{m} w_i(E)                                     (3)

We define the k-th iteration of W by W^0(E) = E and W^k(E) = W(W^{k-1}(E)) for k ≥ 1. Then we have

    F = \bigcap_{k=1}^{\infty} W^k(E)                                   (4)

The sequence of iterations W^k(E) converges to the attractor of the system for any such set E. This means that a family of contractions can approximate a complex image and, using that family, the image can be stored and transmitted very efficiently. Once we have a LIFS it is easy to obtain the encoded image.

To encode an arbitrary image in this way, we must find a family of contractions whose attractor is an approximation to the given image. Barnsley's Collage Theorem states how well the attractor of a LIFS can approximate the given image.

Collage Theorem: Let {w_1, ..., w_m} be contractions on R^n such that |w_i(x) − w_i(y)| ≤ c |x − y| for all x, y ∈ R^n and all i, where c < 1. Let E ⊂ R^n be any non-empty compact set. Then

    d(E, F) \le \frac{1}{1-c} \, d\!\left( E, \bigcup_{i=1}^{m} w_i(E) \right)      (5)

where F is the invariant set of the w_i and d is the Hausdorff metric. As a consequence of this theorem, any subset of R^n can be approximated within an arbitrary tolerance by a self-similar set; i.e., given δ > 0 there exist contracting similarities {w_1, ..., w_m} with invariant set F satisfying d(E, F) < δ. Therefore the problem of finding a LIFS whose attractor F is arbitrarily close to a given image I is equivalent to minimizing the collage distance d(I, ⋃_i w_i(I)).

B. Fractal Image Coding

The main theory of fractal image coding is based on iterated function systems, the attractor theorem and the Collage Theorem. Fractal image coding exploits the spatial self-similarity of an image to remove geometric redundancy. The encoding process is quite complicated but the decoding process is very simple, which gives the method its potential for high compression ratios. Fractal image coding attempts to find a set of contractive transformations that map (possibly overlapping) domain cells onto a set of range cells that tile the image. One attractive feature of fractal image compression is that it is resolution independent, in the sense that when decompressing it is not necessary that the dimensions of the decompressed image be the same as those of the original image.
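The iteration W^k(E) of equations (3)-(4) can be made concrete with a toy LIFS. The sketch below is an illustration only: it uses the three classic Sierpinski-gasket contractions of ratio 1/2 acting on 2-D points (the paper's LIFS acts on image blocks, not points), and the function names are assumptions.

```python
# Illustrative iteration of a LIFS per Eqs. (3)-(4): W(E) is the union of the
# images w_i(E), and W^k(E) approaches the attractor F as k grows. The maps
# here are the three Sierpinski contractions of ratio 1/2 (assumed example).

def apply_W(maps, E):
    """One step W(E) = union of w_i(E); coordinates rounded to deduplicate."""
    return {tuple(round(c, 9) for c in w(p)) for w in maps for p in E}

def iterate(maps, E, k):
    """Compute W^k(E) by repeated application of W."""
    for _ in range(k):
        E = apply_W(maps, E)
    return E

sierpinski = [
    lambda p: (p[0] / 2, p[1] / 2),            # contract toward (0, 0)
    lambda p: (p[0] / 2 + 0.5, p[1] / 2),      # contract toward (1, 0)
    lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.5),  # contract toward (0.5, 1)
]
```

Starting from any point in the unit square, `iterate(sierpinski, {(0.3, 0.7)}, k)` yields 3^k points that settle onto the gasket-shaped attractor, illustrating why the starting set E does not matter.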
The basic algorithm for fractal encoding is as follows:
  • The image is partitioned into non-overlapping range cells {R_i}, which may be rectangular or any other shape such as triangles. In this paper rectangular range cells are used.
  • The image is covered with a sequence of possibly overlapping domain cells. The domain cells occur in a variety of sizes, and there may be a large number of them.
  • For each range cell, the domain cell and corresponding transformation that best cover the range cell are identified. The transformations are generally affine transformations; for the best match, the transformation parameters such as contrast and brightness are adjusted, as shown in Figure 1.
  • The code for the fractal-encoded image is a list containing, for each range cell, the location of the range cell, the domain that maps onto that range cell, and the parameters that describe the transformation mapping the domain onto the range.

Figure 1: Domain-range block transformations

C. Clustering of an Image

Clustering of an image can be seen as a starting step for fractal image compression. Long encoding times result from the need to perform a large number of domain-range matches: the total encoding time is the product of the number of matches and the time required for each match. A classification algorithm reduces the encoding time significantly. Classification of domains and ranges is performed to reduce the number of domain-range match computations, since matches are then performed only for those domains that belong to a class similar to that of the range. The feature computation serves to identify the domains belonging to the class of sub-images whose feature vectors are within the feature tolerance of the feature vector of the range cell. More sophisticated classification schemes use a predefined set of classes: a classifier assigns each domain cell to one of these classes, and during encoding the classifier assigns the range cell to a class so that domain-range comparisons are performed only against domains of the same class as the range cell. The classification employed in this work is based on the back-propagation algorithm, trained on feature vectors extracted from domain cells of an arbitrary image.

Two different measures of image variation are used here as features; they were chosen as representative measures of image variation. The specific features used in this work are:

(a) Standard deviation σ, given by

    \sigma = \sqrt{ \frac{1}{n_r n_c} \sum_{i=1}^{n_r} \sum_{j=1}^{n_c} (p_{i,j} - \mu)^2 }      (6)

where μ is the mean (average) pixel value over the n_r × n_c rectangular image segment and p_{i,j} is the pixel value at row i, column j.

(b) Skewness, which sums the cubes of the differences between the pixel values and the cell mean, normalized by the cube of σ:

    \frac{1}{n_r n_c} \sum_{i=1}^{n_r} \sum_{j=1}^{n_c} \frac{(p_{i,j} - \mu)^3}{\sigma^3}      (7)

III. ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM

ANFIS serves as a basis for constructing a set of fuzzy if-then rules with appropriate membership functions to generate the stipulated input-output pairs. Since the main difficulty in creating a fuzzy system lies in tuning the rules, the concept of neural networks is incorporated in order to create the rules and membership functions. Generally, a fuzzy inference system consists of a knowledge base (data base and rule base), a fuzzifier, a fuzzy inference engine and a defuzzifier, which together map crisp inputs to crisp outputs, as shown in Figure 2.

Figure 2: Schematic diagram of fuzzy building blocks
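Equations (6)-(7) can be computed directly for a block supplied as a flat (row-major) list of its n_r × n_c pixel values; the helper names below are hypothetical.

```python
# The two block features of Eqs. (6)-(7): population standard deviation and
# skewness, for a block given as a flat list of pixel values.

def std_dev(pixels):
    """Eq. (6): population standard deviation of the block."""
    n = len(pixels)
    mu = sum(pixels) / n
    return (sum((p - mu) ** 2 for p in pixels) / n) ** 0.5

def skewness(pixels):
    """Eq. (7): mean cubed deviation normalized by sigma cubed."""
    n = len(pixels)
    mu = sum(pixels) / n
    s = std_dev(pixels)
    # a perfectly flat block has sigma = 0; return 0 to avoid division by zero
    return sum((p - mu) ** 3 for p in pixels) / (n * s ** 3) if s else 0.0
```

A symmetric block such as [1, 2, 3, 4] has zero skewness, while a block with one bright outlier has positive skewness, which is what makes the pair (σ, skew) a useful two-dimensional feature vector for classifying domains.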
The structure of the fuzzy logic controller (FLC) resembles that of a knowledge-based controller, except that the FLC utilizes the principles of fuzzy set theory in its data representation and its logic. The NN structure is designed so that it trains from the data provided to it. The neural network considered for training the fuzzy inference system has two inputs (σ and skew), one hidden layer and one output, as shown in Figure 3.

Figure 3: Architecture of the NN considered

IV. RESULTS AND DISCUSSIONS

A gray-level image of Lena of size 128×128 has been used for training the network and obtaining the structure of the ANFIS. A domain pool with domains of size 4×4 is created for this image. The standard deviation and skewness of the different domains are calculated using equations (6)-(7), and the domains are assigned to specific classes based on these values. The network is trained on these characteristics of the Lena image, and the performance is also tested through the ANFIS structure on another gray-level image, Barbara, of size 256×256. The computer simulations were carried out in the MATLAB/SIMULINK environment on a 1.73 GHz Pentium 4 processor with 256 MB RAM, and the results are presented below. Figure 4 shows the image of Lena considered for this work.

Figure 4: Gray-level image of Lena

The standard deviation and skewness data were collected together with the corresponding class of each domain pool block, giving 3969 data points in total. Of these, 3000 were used for training the ANFIS and the remaining 969 for testing. Figure 5 shows the architectural model of the ANFIS, with two input nodes and one output node, considered for this work. Figures 6 and 7 show the comparison between the acquired and desired outputs on the test patterns.

Figure 5: Architectural model of ANFIS
Figure 6: Comparison between desired and acquired outputs
Figure 7: A detailed view of Figure 6

Figure 7 shows the comparison for the above case over a specific range of values. It can be seen from the figures that the desired and obtained classes are properly superimposed, indicating that the network has been trained properly and produces correct outputs. Figures 8 and 9 show the reconstructed images obtained using FIC with the ANFIS structure, along with the original images of Barbara and Lena.
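For readers unfamiliar with the model family that ANFIS tunes, a minimal first-order Takagi-Sugeno inference step can be sketched as follows. The Gaussian membership functions, the product t-norm and the two illustrative rules are assumptions made for illustration; in ANFIS these parameters are learned from the training data rather than written by hand.

```python
# Minimal first-order Takagi-Sugeno inference sketch (the model ANFIS tunes):
# each rule has Gaussian memberships on the two inputs and a linear
# consequent p*x1 + q*x2 + r; the output is the firing-strength-weighted
# average of the rule consequents.

import math

def gauss(x, c, sigma):
    """Gaussian membership value of x for center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno(x1, x2, rules):
    """rules: list of ((c1, s1), (c2, s2), (p, q, r)) tuples."""
    ws, wf = 0.0, 0.0
    for (c1, s1), (c2, s2), (p, q, r) in rules:
        w = gauss(x1, c1, s1) * gauss(x2, c2, s2)  # rule firing strength
        ws += w
        wf += w * (p * x1 + q * x2 + r)
    return wf / ws  # weighted-average defuzzification
```

With a single rule whose consequent is the constant 5, the output is 5 for any input; with several rules, the output interpolates smoothly between the rule consequents according to the firing strengths, which is exactly the behavior the trained ANFIS exploits to assign a domain class from (σ, skew).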
Table 1 shows the performance of ANFIS-based FIC in terms of PSNR, compression ratio and encoding time, compared with FIC using the exhaustive search method [9]. As seen from Table 1, the PSNR of a gray-level image compressed with ANFIS-based FIC is somewhat lower than that of FIC using exhaustive search. This is because the domain-range block comparison is performed only against the domain pool blocks whose class matches that of the range block, which in turn reduces the encoding time. The table also shows that the encoding time is greatly reduced by the proposed technique.

Table 1: Comparison of FIC and ANFIS-based FIC

              PSNR (dB)            Compression ratio      Encoding time (sec)
  Image     FIC     FIC+ANFIS     FIC      FIC+ANFIS     FIC      FIC+ANFIS
  Lena      35.26   32.434        1.2:1    6.73:1        8600     2350
  Barbara   32.67   30.788        1.2:1    6.73:1        8400     2209

Figure 8a: Original image of Lena        Figure 8b: Reconstructed image of Lena
Figure 9a: Original image of Barbara     Figure 9b: Reconstructed image of Barbara

V. CONCLUSIONS

An ANFIS-based network that classifies the domain cells of a gray-level image based on their statistical characteristics has been proposed to perform fractal image compression. Conventional methods require a deep understanding of the system dynamics, involving extensive mathematical analysis, whereas such artificial intelligence techniques can adapt to the system characteristics easily. Computer simulations reveal that the performance of ANFIS-based fractal image compression is greatly improved in terms of encoding time, without degrading the image quality, compared with traditional fractal image compression employing exhaustive search.
REFERENCES

[1] A. E. Jacquin, "Image coding based on a fractal theory of iterated contractive image transformations", IEEE Trans. Image Processing, vol. 1, pp. 18-30, 1992.
[2] M. F. Barnsley and A. E. Jacquin, "Application of recurrent iterated function systems to images", Proc. SPIE, vol. 1001, no. 3, pp. 122-131, 1988.
[3] A. E. Jacquin, "Fractal image coding: a review", Proceedings of the IEEE, vol. 81, no. 10, pp. 212-223, 1993.
[4] V. Fernandez, R. Garcia Martinez, R. Gonzalez and L. Rodriguez, "Genetic algorithms applied to clustering", Proc. of the International Conference on Signal and Image Processing, pp. 97-99.
[5] Jyh-Shing Roger Jang, "ANFIS: Adaptive-Network-Based Fuzzy Inference System", IEEE Trans. on Systems, Man and Cybernetics, vol. 23, no. 3, pp. 665-685, May 1993.