


Advanced Computing: An International Journal (ACIJ), Vol. 5, No. 1, January 2014

SATELLITE IMAGE COMPRESSION TECHNIQUE BASED ON THE EVIDENCE THEORY

Khaled Sahnoun, Noureddine Benabadji
Department of Physics, University of Sciences and Technology of Oran, Algeria
Laboratory of Analysis and Application of Radiation

ABSTRACT

Satellite image compression reduces redundancy in the data representation in order to save storage and transmission costs. On board a satellite, compression compensates for limited resources, in terms of mass memory and downlink bandwidth, and thus offers a solution to the bandwidth-versus-data-volume dilemma of modern spacecraft. Compression is therefore an essential feature of the payload image-processing units of many satellites. In this paper, an improvement of the quantization step of the input vectors is proposed. The k-nearest neighbour (KNN) algorithm is applied on each colour axis, and the three resulting classifications, considered as three independent sources of information, are combined in the framework of the evidence theory to select the best code vector. A Huffman scheme is then applied for encoding and decoding.

KEYWORDS

Evidence theory, vector quantization, k-nearest neighbour, compression

1. INTRODUCTION

Compression by vector quantization [1] takes an input vector of a given dimension and replaces it with a vector of the same size belonging to a dictionary, a finite set of code vectors (also called classes, since they are computed as iterative averages of vectors). The quantization step relies on the nearest-neighbour rule: the vector to classify is assigned to the class that yields the smallest distortion. Such an assignment rule may be too drastic when the distances between the input vector and several code vectors are very close. A possible improvement that avoids this hard decision is to consider each colour component independently, so as to obtain one classification per component. In this study, the components (R, G, B) are treated as three independent information sources. In a first phase, the k-nearest-neighbour rule is applied to each of the three components (R, G, B), generating three sets of candidate classes. This phase, which retains K neighbours rather than a single one, makes it possible to represent the uncertainty along each colour component and to postpone the decision. The final class assignment is made after combining these three classifications, a technique that belongs to the field of data fusion. Among the tools available in this domain, we use the evidence theory [2], which makes it possible to process uncertain information and to combine information from several sources.

DOI : 10.5121/acij.2014.5102
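As a rough illustration of the per-component classification described above, the following sketch retains the k closest code vectors along each colour axis, next to the classical hard assignment it is meant to soften. The codebook, the value of k, and the distance choices are placeholders for illustration; the paper does not give an implementation.

```python
import numpy as np

def nearest_code_vector(vector, codebook):
    """Classical hard VQ assignment: full-vector nearest neighbour."""
    dist = np.linalg.norm(codebook - vector, axis=1)
    return int(np.argmin(dist))

def knn_per_axis(vector, codebook, k=3):
    """For each colour axis (R, G, B), return the indices of the k code
    vectors whose component on that axis is closest to the input's.
    Each index set acts as one 'source of information' to be fused later."""
    candidates = []
    for axis in range(3):
        dist = np.abs(codebook[:, axis] - vector[axis])
        candidates.append(np.argsort(dist)[:k])
    return candidates
```

With a near-red input pixel, the hard rule picks the red code vector outright, while the per-axis sets keep several candidates per component for the fusion step.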
2. USE OF THE EVIDENCE THEORY

2.1. Basic principle

Let Θ be the set of all possible classes, corresponding to the dictionary in our application. The evidence theory operates on the power set of Θ, noted 2^Θ. An initial mass function m is defined from 2^Θ to [0, 1] and satisfies the following conditions:

m(∅) = 0 and Σ_{A ⊆ Θ} m(A) = 1     (1)

where ∅ is the empty set. m(A) quantifies the belief given to the proposition that the class belongs to the subset A of Θ. Subsets A with m(A) > 0 are called focal elements. Two initial mass functions m1 and m2 can be combined according to Dempster's rule [3]:

(m1 ⊕ m2)(A) = (1 / (1 − K)) Σ_{B ∩ C = A} m1(B) m2(C)     (2)

where K = Σ_{B ∩ C = ∅} m1(B) m2(C) is the conflict factor and represents the disagreement between the two sources (Dempster's combination is also called the orthogonal sum and is denoted ⊕). Note that (m1 ⊕ m2)(∅) = 0. After combination, a decision on the most likely element of Θ must be taken. Several decision rules are possible; we use the maximum of pignistic probability presented by Smets [4], which uses the pignistic transformation and evenly distributes the weight associated with a subset A of Θ over each of its elements:

BetP(x) = Σ_{A ⊆ Θ, x ∈ A} m(A) / |A|     (3)

where |A| is the cardinal of A. The decision then goes to the element of Θ for which this value is largest.

2.2. Application to the vector quantization

2.3. The quantization

We represent the information provided by each of the three independent classifications, along the R, G, and B axes, by an initial mass function. These functions are built after computing the K nearest neighbours (KNN) and before the final decision; they make it possible to take into account the uncertainty associated with each axis: classes that are very close to each other on a given axis are grouped in the same focal element, and the decision is made only after combining the results of the other projections. For each axis, the most significant elements are identified by a distance measured on that axis. The initial mass function constructed for an axis has three focal elements, the third being the complement of the first two in Θ.
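The combination and decision rules of equations (2) and (3) can be sketched as follows. The dictionary-of-frozensets representation of a mass function is an illustrative choice, not the authors'.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with
    Dempster's rule: intersect focal elements pairwise, then renormalise
    by 1 - K, where K is the mass falling on empty intersections."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

def pignistic(m):
    """Pignistic transformation (BetP): spread each focal element's mass
    evenly over its members, yielding a probability on singletons."""
    bet = {}
    for subset, w in m.items():
        for x in subset:
            bet[x] = bet.get(x, 0.0) + w / len(subset)
    return bet
```

Combining the three per-axis mass functions is then two calls to `dempster_combine`, and the selected class is the argmax of `pignistic` on the result.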
We construct the mass function from the vector to classify and its nearest neighbour along the considered axis; a constant greater than 1 accounts for the sensitivity of the human visual system along that axis. When the corresponding condition is satisfied, the first focal element is a singleton corresponding to the nearest neighbour, and the remaining masses are assigned to the other sets according to the distribution of the elements along the axis. The initial mass function for an axis also involves the maximal distance between the vector and the codebook elements in the (R, G, B) space and the average distance between those elements: the larger this distance, the smaller the corresponding mass. The three initial mass functions obtained from the R, G, and B projections are then combined with equation (2), and the class assigned to the input vector is selected on the basis of the maximum of pignistic probability, equation (3).

3. HUFFMAN COMPRESSION FOR (R, G, B)

Compression is achieved by mapping a range of values to a single quantum value: once the number of distinct symbols in the stream is reduced, the stream becomes more compressible. After the image is quantized, the Huffman algorithm is applied. Huffman coding uses a variable-length code table, derived from the estimated probability of occurrence of each possible source symbol, to encode each symbol of the image. It chooses the representation of each symbol so as to produce prefix codes, which express the most common source symbols with shorter bit strings than those used for less common symbols. At the end of this step, a compressed image is obtained [5].

4. EXPERIMENTAL RESULTS AND DISCUSSION

Different colour images were used for the experiments; simulation results for the different images are given in Table 1.
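The Huffman stage of Section 3 can be sketched as follows. Building the code tree with a heap of partial code tables is a standard implementation choice, not taken from the paper.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code table from symbol frequencies: repeatedly merge
    the two least frequent nodes, prepending a bit to their codewords."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # heap items: (frequency, tie-breaker, {symbol: partial codeword})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def encode(symbols, table):
    """Concatenate the codewords of a symbol stream into a bit string."""
    return "".join(table[s] for s in symbols)
```

As expected, the most frequent symbol receives the shortest codeword, so the quantized stream shrinks.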
The quality of the compressed image is measured with the peak signal-to-noise ratio (PSNR), calculated as

PSNR = 10 log10(255² / MSE)     (10)

where MSE is the mean squared error between the original image I and the reconstructed compressed image I' of size M × N [6]:

MSE = (1 / (M N)) Σ_{i=1..M} Σ_{j=1..N} [I(i, j) − I'(i, j)]²

The algorithm was implemented in C++ Builder 6.0 to encode and decode 256 × 256 satellite images.
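A direct transcription of the PSNR and MSE formulas, for greyscale images given as nested lists, with `peak = 255` as in formula (10):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two equally sized greyscale images
    (lists of rows of pixel values)."""
    m, n = len(original), len(original[0])
    mse = sum((original[i][j] - reconstructed[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```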
Figure 1. Original Satellite image (1)
Figure 2. Reconstructed Satellite image (1)
Figure 3. Original image 'Lena'
Figure 4. Reconstructed image 'Lena'
Figure 5. Original Satellite image (2)
Figure 6. Reconstructed Satellite image (2)
Image                 PSNR (dB)   CR
Lena                  32.92       13.66
Satellite image (1)   34.68       15.89
Satellite image (2)   33.89       15.55

Table 1. Compression ratios (CR) and PSNR values for the test images

Table 1 shows that the PSNR values exceed 32 dB for all images, while the achievable compression ratios differ: for comparable PSNR values, the compression ratio obtained on the satellite imagery is clearly higher than on the standard Lena image.

5. CONCLUSIONS

Improving the quantization step of the input vectors against the code vectors of a dictionary by means of the evidence theory has yielded promising results in our study. Vector quantization was performed on each of the three colour components (R, G, B) according to the colour dispersion of the K nearest neighbours (KNN). The results show that combining the evidence theory in the quantization phase with Huffman coding improves the quality of the reconstructed satellite images.

6. REFERENCES

[1] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Kluwer Academic Publishers, 1991.
[2] G. Shafer, A Mathematical Theory of Evidence. Princeton University Press, 1976.
[3] A. Dempster, "Upper and lower probabilities induced by a multivalued mapping", Ann. Math. Statist., 38:325–339, 1967.
[4] P. Smets, "Constructing the pignistic probability function in a context of uncertainty", Uncertainty in Artificial Intelligence, 5:29–39, 1990. Elsevier Science Publishers.
[5] H. Kekre, Tanuja K. Sarode, Sanjay R. Sange, "Image reconstruction using fast inverse halftone & Huffman coding technique", IJCA, vol. 27, no. 6, pp. 34–40, 2011.
[6] V. Setia, Vinod Kumar, "Coding of DWT coefficients using run-length coding and Huffman coding for the purpose of color image compression", IJCA, vol. 27, no. 6, pp. 696–699, 2012.