Digital Image Watermarking Based on Super-Resolution Image Reconstruction


2012 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2012)

Digital Image Watermarking Based on Super-Resolution Image Reconstruction

Xiangbin Feng, Yonghong Chen
College of Computer Science & Technology, Huaqiao University, Xiamen, China

Abstract—In this paper we present a novel algorithm based on Super-Resolution Image Reconstruction (SRIR). We use a pattern recognition method to optimize the performance of digital watermarking. First, the binary watermark is scanned into a one-dimensional sequence before embedding; at the same time, a mixed error-correcting code, the (3,1,2) convolutional code combined with the (3,1) repetition code, is chosen to encode the original watermark, and the sequence is input into the (3,1,2) convolutional encoder and the (3,1) repetition encoder frame by frame. The output sequence is scanned into matrices that serve as the watermark information. Second, the super-resolution reconstruction of image sparse representation is applied to the carrier image. Then the image is divided into blocks, and the watermark encoded with the mixed error-correcting code is embedded repeatedly in the low-frequency band of the Discrete Wavelet Transform (DWT). Experimental results show that our image watermarking scheme with SRIR is better than the traditional scheme without SRIR: it is not only invisible, but also robust against various common signal processing operations (such as JPEG compression, salt-and-pepper noise, Gaussian low-pass filtering, and median filtering).

Keywords: digital image; digital watermarking; error-correcting code (ECC); Super-Resolution Image Reconstruction

I. INTRODUCTION

With the rapid development of the Internet, watermarking security has attracted a great deal of interest in both academia and industry during the last few years. Overall, digital watermarking algorithms can be divided into two categories, spatial domain and frequency domain, and frequency-domain methods are generally superior to spatial-domain methods [1]. For instance, the wavelet transform domain has good time-frequency features, and multi-resolution wavelet analysis is a commonly used key technology. The basic idea is to decompose the image with multi-resolution decomposition technology into sub-images of different spaces and different frequencies [2]. Watermarking has two essential properties: transparency and robustness. For image watermarking, invisibility means that the carrier image is not significantly degraded after embedding. Robustness refers to the ability of the watermark to survive common signal processing operations.

A key point of any watermarking technique is the trade-off between transparency and robustness. From the viewpoint of intellectual property protection, the watermark is more important than the image, provided that image quality is maintained. To enhance the robustness of the embedded watermark, a digital image watermarking algorithm based on SRIR is proposed in this paper. The main work is to apply the super-resolution reconstruction of image sparse representation to the carrier image. We choose a super-resolution reconstruction method in which the original image is first reduced and then enlarged. In the process of enlargement, the filling-in of pixel information greatly reduces the correlation among the original image pixels, which enhances the robustness of the watermark. Moreover, the mixed error-correcting coder adds more redundancy among the codes and increases the error-correcting capability of the decoder. Finally, the image is divided into blocks, and the watermark encoded with the mixed error-correcting code is embedded repeatedly in the low-frequency band of the DWT. The results show that our image watermarking scheme with SRIR is better than the traditional one.

This work presents a secure, fast, and efficient image watermarking algorithm. The rest of this paper is organized as follows. Section II introduces the principle of SRIR. Section III describes the coding and decoding method of the convolutional code and introduces the repetition code. Section IV presents watermark insertion and extraction. Section V presents the experimental results. Conclusions are drawn in Section VI.

II. SUPER-RESOLUTION IMAGE RECONSTRUCTION

SRIR produces a high-resolution image from several low-resolution images; it can also eliminate additive noise and the degradation caused by limited detector size and optical components. From a mathematical viewpoint, super-resolution is an ill-posed inverse problem, and a reasonable prior assumption is required to solve it. Commonly used prior models include the Gaussian Process Priors model [3], the Huber MRF model [4], and the Total Variation model [5]. However, to reach a better result, one needs to solve not only the sub-pixel image alignment problem, but also to satisfy the requirement that the number of low-resolution images grow in proportion to the square of the magnification. Therefore, if a reconstruction-based scheme is used and the number of low-resolution images is insufficient, the image quality will degrade severely as the magnification increases.

This work is supported by the Huaqiao University Science and Technology Foundation (No. 10Y0199 and No. JB-ZR1131), the Project sponsored by SRF for ROCS, SEM, and the Natural Foundation of Fujian Province of China (No. 2011J05151).

978-1-4673-0024-7/10/$26.00 ©2012 IEEE
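The reduce-then-enlarge pre-processing described in the Introduction can be sketched in a few lines. The paper fills in the enlarged pixels by sparse-representation SRIR; in this illustrative sketch a pure-NumPy bilinear resampler stands in for the learned reconstruction stage, and the helper names (`bilinear_resize`, `reduce_then_enlarge`) are ours, not the paper's.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation (pure NumPy)."""
    in_h, in_w = img.shape
    # Sample positions in the source image for each output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def reduce_then_enlarge(img, factor=2):
    """Pre-process a carrier image: shrink it, then enlarge it back to the
    original size.  The re-synthesised pixels weaken the inter-pixel
    correlation of the carrier before the watermark is embedded."""
    h, w = img.shape
    small = bilinear_resize(img, h // factor, w // factor)
    return bilinear_resize(small, h, w)
```

In the paper the enlargement step would be replaced by the sparse-representation reconstruction of Section II, applied to the luminance channel only.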
Freeman et al. [6-7] used a Markov network to model the correspondence between low-resolution and high-resolution image blocks, and used the Bayesian belief propagation algorithm to find a local maximum of the posterior probability; this was the first algorithm to apply machine learning to super-resolution.

In this paper, we focus on super-resolution of a single original image. Like the learning-based method mentioned above, we also use a block-based approach. Under the goal of sparse representation of the image, we search for an over-complete dictionary, generated from the LR/HR image block pairs, that is isomorphic under sparse representation. This dictionary is then used to obtain the sparse representation of each test image block, and the output is a high-resolution luminance image.

A. Sparse Representation of Image

The basic principle of sparse representation of an image can be described as follows. Assume that $b \in R^m$ is the vector representation of an image block and $D \in R^{m \times n}$ is an over-complete dictionary. Let $\|\cdot\|_0$ denote the number of non-zero elements of a vector and let $\delta$ be the smallest number of linearly dependent columns of $D$. If $\|\beta\|_0 < \delta/2$, the sparse representation $\beta \in R^n$ of $b$ is unique [8]; thus

$\min \|\beta\|_0 \quad \text{s.t.} \quad b = D\beta$  (1)

Finding the exact solution of (1) has been proved to be an NP-hard problem, so approximate solutions are usually considered. The earliest and simplest is the Matching Pursuit algorithm [9]; the Orthogonal Matching Pursuit algorithm is an improved version of it [10]. Both algorithms use a greedy idea: at each step, the atom with the largest inner product with the residual of the former step is selected. Another approximate approach is the Basis Pursuit algorithm [11], which replaces the $l_0$ norm with the $l_1$ norm, transforming the above non-convex problem into a convex optimization problem that can be solved by linear programming. In addition, the Focal Underdetermined System Solver (FOCUSS) [12] replaces the $l_0$ norm with an $l_p$ norm and concentrates the energy iteratively; although this method can obtain a more accurate solution, the problem is no longer a convex optimization problem and it is very sensitive to noise.

B. Dictionary Learning Algorithm

For image (block) sparse representation, besides the above algorithms for solving the sparse representation itself, the construction of the over-complete dictionary is also very important. Finding the optimal basis structure under sparse representation is called dictionary learning; it must not only satisfy the constraint of unique sparse representation, but also yield a sparser and more accurate representation. For the whole training set, we need to solve:

$\min_{D,\beta_i} \sum_i \|b_i - D\beta_i\|_2^2 + \omega \|\beta_i\|_0$  (2)

where $b_i$ is a training sample, $\beta_i$ is the sparse representation of training sample $b_i$ in dictionary $D$, and $\omega$ is the regularization parameter. Following the K-SVD algorithm [13], the Orthogonal Matching Pursuit algorithm is used to solve the sparse representation of the signals; then only $c_k$, the $k$-th column of dictionary $D$, and the corresponding row of representation coefficients $\beta_T^k$ are updated. Ignoring the $\|\beta_i\|_0$ term, the above problem can be written as the optimization problem:

$\sum_i \|b_i - D\beta_i\|_2^2 = \|B - D\beta\|_F^2 = \Big\| \Big( B - \sum_{j \neq k} c_j \beta_T^j \Big) - c_k \beta_T^k \Big\|_F^2 = \|E_k - c_k \beta_T^k\|_F^2$  (3)

where $E_k$ represents the residual between the image blocks and all dictionary elements except the $k$-th column. To make $\|E_k - c_k \beta_T^k\|_F^2$ reach its minimum, $c_k \beta_T^k$ must approximate $E_k$; given the singular value decomposition $E_k = U \Delta V^T$, we let $c_k$ be the first column of $U$ and $\beta_T^k$ be the first column of $V$ times $\Delta(1,1)$.

C. Super-Resolution Algorithm

In this paper, the super-resolution reconstruction of image sparse representation is implemented only on the luminance component, while a bilateral interpolation filter is applied to the other two components [14].

For image super-resolution, two dictionaries are required: $D$, a low-resolution image block dictionary, and $N$, the corresponding high-resolution image block dictionary. In order for the two dictionaries to be isomorphic under sparse representation, we need to solve the following equation:

$\min_{D,N,\beta_i} \sum_i \|b_i - D\beta_i\|_2^2 + \omega_0 \|x_i - N\beta_i\|_2^2 + \omega \|\beta_i\|_0$  (4)

where $b_i$ is the luminance-component vector of a low-resolution training image block, $x_i$ is the corresponding vector of a high-resolution training image block, $D$ is the low-resolution dictionary, $N$ is the corresponding high-resolution dictionary, $\beta_i$ is suitable for both $D$ and $N$, and $\omega_0$ and $\omega$ denote the regularization parameters for the second and third terms, respectively. Using the K-SVD algorithm, (4) can be described as the optimization problem:

$\min_{P,\beta_i} \sum_i \|z_i - P\beta_i\|_2^2 + \omega \|\beta_i\|_0$  (5)

where $z_i = \binom{b_i}{\omega_0 x_i}$ and $P = \binom{D}{\omega_0 N}$.
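The greedy atom selection and re-fitting loop that OMP [10] performs on problem (1) can be sketched as follows. This is a minimal illustration, assuming unit-norm dictionary columns; the function name and the stopping rule (a fixed number of atoms) are our choices, not specified by the paper.

```python
import numpy as np

def omp(D, b, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (columns of D,
    assumed unit norm) and least-squares fit b on the selected support."""
    m, n = D.shape
    residual = b.copy()
    support = []
    beta = np.zeros(n)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit b on all selected atoms (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], b, rcond=None)
        residual = b - D[:, support] @ coef
    beta[support] = coef
    return beta
```

In the K-SVD loop of Section II.B, a routine like this would supply the sparse codes $\beta_i$ before each rank-one dictionary update.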
When the low-resolution image block dictionary $D$ and the high-resolution image block dictionary $N$ have been trained, for each low-resolution test image block the BP algorithm is applied to solve for the sparse representation coefficients $\beta_i$ suitable for the low-resolution dictionary $D$:

$\min \|\beta\|_1 \quad \text{s.t.} \quad b_i = D\beta_i$  (6)

Then the high-resolution image block can be reconstructed as $x_i = N\beta_i$, and the corresponding bilateral filter is designed as:

$H_i = \frac{1}{k_i} \sum_{j\downarrow \in \Omega} H_{j\downarrow} \, g_1\!\left(H_{i\downarrow} - H_{j\downarrow}\right) g_2\!\left(x_i - x_j\right)$  (7)

where $i$ and $j$ denote pixel positions in the high-resolution luminance image, $i\downarrow$ and $j\downarrow$ denote pixel positions in the down-sampled low-resolution UV color image, $g_1$ is the spatial filter kernel, $g_2$ is the range filter kernel, $H_{j\downarrow}$ is the UV value at position $j\downarrow$, $k_i$ is a normalization factor, and $H_i$ is the optimized UV value that contains more information.

III. CODING AND DECODING METHOD

A. Coding Method of Convolutional Code

In this paper, the (3,1,2) convolutional encoder is used to encode the original watermark. A convolutional encoder is a finite-memory system. The (3,1,2) convolutional encoder is defined as follows:

$c = [m_0, m_1, m_2, \dots, m_i, \dots] \times G_\infty = M \times G_\infty$  (8)

where $M = [m_0, m_1, m_2, \dots, m_i, \dots]$ denotes the endless input message sequence, and the generator matrix $G_\infty$ of the (3,1,2) convolutional encoder is the semi-infinite matrix

$G_\infty = \begin{bmatrix} 111 & 010 & 001 & 000 & 000 & \cdots \\ 000 & 111 & 010 & 001 & 000 & \cdots \\ 000 & 000 & 111 & 010 & 001 & \cdots \\ & & & \ddots & & \end{bmatrix}$

whose rows and columns are unbounded. The input message sequence is divided into $k$-length message blocks; meanwhile, to increase the coding efficiency, the whole message sequence is sent into the encoder frame by frame, and each frame contains $L$ message blocks. In the (3,1,2) convolutional encoder we let $k = 1$ and $L = 8$, so there are $2^{kL} = 256$ possible original sequences for detection.

B. Decoding Method of Convolutional Code

The performance of a convolutional code depends on the code distance and the decoding method. In this paper, the decoding principle is described as follows:

- Let $C_1$ and $C_2$ denote two different binary block sections randomly output from the same section of $G_\infty$. The code distance is the Hamming weight obtained after binary addition of the corresponding code letters of $C_1$ and $C_2$. Due to the closure property of the linear convolutional code, if $C_1 + C_2 = C$, then $C$ is also an output block section, and the rule can be generally described as $D(C_1, C_2) = W(C_1 + C_2) = W(C) = W(C + 0) = d(C, 0)$.

- The error-correcting capability of the decoder depends on the minimum distance among the output sequences, and maximum-likelihood decoding is usually used to find the minimum distance. The principle of maximum-likelihood decoding is as follows: compute all the Hamming distances between the received frame sequence and the corresponding original ones. On receiving a frame, the decoder compares it with all the block sections and chooses the most likely one, so that the whole input code sequence corresponds to the minimum distance in the end.

C. Repetition Coding

The rule of repetition coding is to repeat each original signal of the watermark $N$ times in a block section, denoted $(N,1)$. On the decoding side, the majority elements of the block section are used to reconstruct the original signal. In this paper we let $N = 3$, so the (3,1) repetition code is used to encode the watermark. The mixed combination of the convolutional code and the repetition code further enhances the error-correcting capability of the decoder and gives a larger error-correcting margin.

IV. WATERMARK INSERTION AND EXTRACTION

A. Watermark Insertion

We use a gray image $I$ of size $R_1 \times R_1$ as the carrier image, and the watermark $W$ is an $R_2 \times R_2$ binary image. Meanwhile, $(R_1 \times R_1)/(2R_2 \times 2R_2 \times P \times N)$ must be an integer. The main steps of the SRIR-based embedding procedure can be described as follows.

First, on the carrier side, the super-resolution reconstruction of image sparse representation is applied to the original image. In the super-resolution reconstruction experiment, the patch size for the low-resolution input image is 3x3 with an overlap of 1 between adjacent patches; for the corresponding high-resolution image, all image blocks are 12x12 with an overlap of 4 pixels. For training the coupled super-resolution dictionary, the number of sampled patches is 50000 and the size of the dictionary is 1024. We down-sample the input image and perform super-resolution using sparse representation on the luminance domain only. Finally, the output image $\tilde{I}$ is used as input to the next step.

Divide $\tilde{I}$ into $2R_1 \times 2R_1$ blocks, obtaining a sub-block sequence $A$ ($A = A_1, A_2, A_3, \dots, A_k$). According to the degree of texture complexity, $A$ is rearranged into $\tilde{A}$ ($\tilde{A} = A_{a_1}, A_{a_2}, A_{a_3}, \dots, A_{a_k}$), where $A_{a_1} \le A_{a_2} \le A_{a_3} \le \dots \le A_{a_k}$. In $\tilde{A}$, the anterior $(P \times N)$ sub-blocks are chosen for embedding the code message.
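Reading the generator matrix $G_\infty$ of Section III row by row, each input bit $m_t$ contributes (1,1,1) at time $t$, (0,1,0) at time $t+1$, and (0,0,1) at time $t+2$, so the output triple at time $t$ is $(m_t,\; m_t \oplus m_{t-1},\; m_t \oplus m_{t-2})$. A minimal sketch of this encoder and of the (3,1) repetition code follows; the function names are illustrative, and the maximum-likelihood frame decoder is omitted.

```python
def conv_encode(bits):
    """(3,1,2) convolutional encoder built from the paper's generator matrix:
    each output triple is (m_t, m_t ^ m_{t-1}, m_t ^ m_{t-2}),
    i.e. generator polynomials (1, 1 + D, 1 + D^2)."""
    m1 = m2 = 0  # encoder memory: the two previous input bits
    out = []
    for m in bits:
        out += [m, m ^ m1, m ^ m2]
        m2, m1 = m1, m
    return out

def repeat_encode(bits, n=3):
    """(n,1) repetition code: repeat each bit n times."""
    return [b for b in bits for _ in range(n)]

def repeat_decode(coded, n=3):
    """Majority vote over each n-bit block."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]
```

With $n = 3$, the repetition decoder corrects any single bit error inside a block, which is the extra margin the mixed ECC adds on top of the convolutional code.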
Then watermark $W$ is scanned into an $R_2 \times R_2$-length message sequence $M$. Divide $M$ into $k$-length message blocks. Frames of $L$ message blocks are input into the (3,1,2) convolutional encoder proposed in this paper. $S$ denotes the whole output code sequence, and its length is $P \times R_2 \times R_2$.

Finally, $S$ is scanned into matrices of size $R_2 \times R_2$ each, yielding the code message matrices $(W_1, W_2, W_3, \dots, W_S)$. Then the (3,1) repetition code is used to code each message matrix $N$ times repeatedly, and the Discrete Wavelet Transform (DWT) is used to embed these coded matrices in order into the low-frequency band of each chosen sub-block as the watermark. We embed in a pixel-to-pixel manner, and the addition rule can be generally described as

$B_i' = \mathrm{IDWT}\!\left[ \left( W_j + \gamma \times B_i^{Lxy} \right) \oplus B_i^{Hxy} \right]$

where $\gamma$ denotes the parameter set of the embedder, $B_i^{Lxy}$ is the low-frequency sub-band, and $B_i^{Hxy}$ are the high-frequency sub-bands at scale 1. $W_j$ and $B_i'$ represent one of the coded matrices and one watermarked sub-block, respectively.

B. Watermark Extraction Process

The watermark extraction is carried out by a similar procedure. The original image $I$ and the generator matrix $G_\infty$ are needed. The first and key step is how to decode effectively. With regard to repetition decoding, the $N$ extracted copies of each message are reconstructed into one set of code message matrices $(W_1, W_2, W_3, \dots, W_S)$ according to the majority principle. Convolutional decoding is used to search for the most likely block section for each received frame in the decoder. The $P$ message matrices are scanned into a one-dimensional sequence consisting of frames. According to the maximum-likelihood decoding principle, each frame is compared with the block sections and the most likely one is chosen. Finally, all the most likely block sections are linked back into the code sequence $S$, which is returned to the original message sequence $M$.

V. EXPERIMENTAL RESULTS AND EVALUATIONS

In this section we present the results of our proposed algorithm. For the tests of our algorithm, the parameters were set as follows. The 512x512 gray original image and the 64x64 binary watermark used in our experiment are shown in Figure 1. For SRIR, the zoom factor is set to 4 and the sparse representation is used directly. The mixed ECC combines the (3,1,2) convolutional code and the (3,1) repetition code; $L$ and $\beta$ are set to 8 and 7, respectively. According to the experiment, we obtain PSNR = 55.6378 dB. The experimental result shows that the watermarked image is not perceptually different from the original gray image.

Figure 1. Experimental result

To evaluate the performance of our SRIR-based scheme, the watermarked image was subjected to several attacks. According to our experiments, the SRIR-based method is robust to low-quality JPEG compression, salt-and-pepper noise, multiplicative noise, and center-cutting signal processing (see Figure 2): (a) JPEG compression with quality 50%, NC = 0.9676; (b) salt-and-pepper noise with density 0.05, NC = 0.9765; (c) multiplicative noise with density 0.01, NC = 0.7976; (d) center cutting with size 200x200, where some watermark messages are lost, NC = 1. The results of the other attack-parameter experiments are shown in Table I. The results show that the proposed SRIR-based algorithm has better robustness against all kinds of attacks than the traditional method without SRIR.

Figure 2. The extracted watermark

TABLE I. COMPARISON OF PARAMETERS UNDER TYPICAL ATTACKS

Attack                            Parameter   Without SRIR   With SRIR
Gaussian Low-Pass Filtering 3x1   PSNR        30.1173        35.0935
                                  NC           0.8205         0.9530
Median Filtering 3x3              PSNR        31.1424        39.2861
                                  NC           0.8582         0.9883
Salt-and-Pepper Noise 0.02        PSNR        22.4826        22.5820
                                  NC           0.9971         0.9987
Salt-and-Pepper Noise 0.03        PSNR        20.6004        20.6436
                                  NC           0.9926         0.9966
Salt-and-Pepper Noise 0.04        PSNR        19.3139        19.4277
                                  NC           0.9847         0.9906
Salt-and-Pepper Noise 0.06        PSNR        17.6199        17.6596
                                  NC           0.9586         0.9693

Obviously, it can be observed that our proposed watermark scheme grants a notably higher degree of robustness against JPEG lossy compression and salt-and-pepper noise signal processing. Even if the watermarked image is seriously disturbed, our method can still preserve the structure of the extracted watermark and we can easily identify the
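The LL-band embedding rule of Section IV can be illustrated with a one-level wavelet decomposition. The paper does not name its wavelet filter, so this sketch assumes a Haar filter, interprets the recombination as leaving the high-frequency sub-bands untouched, and uses a non-blind extraction that differences the LL bands of the original and watermarked blocks; the function names and the scalar strength gamma = 4 are illustrative, not the paper's parameters.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (normalized by 1/2): returns LL and detail bands."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0   # lowpass over row pairs
    hi = (img[0::2, :] - img[1::2, :]) / 2.0   # highpass over row pairs
    LL = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    LH = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    HL = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    HH = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Exact inverse of haar_dwt2."""
    LH, HL, HH = bands
    lo = np.empty((LL.shape[0], LL.shape[1] * 2))
    hi = np.empty_like(lo)
    lo[:, 0::2] = LL + LH; lo[:, 1::2] = LL - LH
    hi[:, 0::2] = HL + HH; hi[:, 1::2] = HL - HH
    out = np.empty((lo.shape[0] * 2, lo.shape[1]))
    out[0::2, :] = lo + hi; out[1::2, :] = lo - hi
    return out

def embed(block, wm, gamma=4.0):
    """Additively embed a binary watermark tile into the LL band of a block."""
    LL, bands = haar_dwt2(block)
    return haar_idwt2(LL + gamma * wm, bands)

def extract(block, marked, gamma=4.0):
    """Non-blind extraction: difference the LL bands and threshold."""
    LL0, _ = haar_dwt2(block)
    LL1, _ = haar_dwt2(marked)
    return ((LL1 - LL0) / gamma > 0.5).astype(int)
```

Because the transform is linear and the detail bands are untouched, extraction recovers the tile exactly when no attack intervenes; the mixed ECC of Section III is what carries the scheme through attacks.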
watermark. Figs. 3 to 6 show the difference between the methods with and without SRIR. It is easy to see that the method with SRIR is much better than the one implemented only with mixed ECC.

Figs. 3 and 4 show that the new approach is superior to the traditional one: despite the decline in the quality of the watermarked image, the values of PSNR and NC are significantly increased. Meanwhile, when the JPEG quality is higher than 30 the advantage is even clearer, certifying that the robustness is visibly improved. And as we can see from Figs. 5 and 6, there are no major differences in PSNR between the new and the traditional method, but the values of NC reveal the prominent advantage of using our algorithm.

Figure 3. PSNR of the two techniques under different JPEG qualities
Figure 4. NC of the two techniques under different JPEG qualities
Figure 5. PSNR of the two techniques under salt-and-pepper noise
Figure 6. NC of the two techniques under salt-and-pepper noise

VI. CONCLUSION

In this paper, a novel robust watermarking method based on SRIR is proposed. The method mainly applies SRIR to pre-process the original image, so that the correlation among the pixels of the original is reduced. Meanwhile, the watermark is encoded with the (3,1,2) convolutional encoder and the (3,1) repetition encoder before embedding. The paper also contains the results of tests showing the high robustness of the algorithm against the attacks of JPEG lossy compression, salt-and-pepper noise, multiplicative noise, and center cutting. In addition, it should be mentioned that the method with SRIR is considerably more robust against the attacks of JPEG lossy compression and salt-and-pepper noise than the traditional one.

REFERENCES

[1] Shenghe Sun, Zheming Lu, et al., "Digital watermarking technology and its application," Beijing: Sciences Publishing House, 2004.
[2] Podilchuk C. I., Delp E. J., "Digital Watermarking: Algorithms and Applications," IEEE Signal Processing Magazine, 2001, 18(4): 33-46.
[3] Tipping M. E., Bishop C. M., "Bayesian Image Super-Resolution," in Becker S., Thrun S., Obermayer K., eds., Advances in Neural Information Processing Systems, Cambridge, USA: MIT Press, 2003, XVI: 1279-1286.
[4] Capel D. P., "Image Mosaicing and Super-Resolution," Oxford, UK: University of Oxford, 2001.
[5] Farsiu S., Robinson M. D., et al., "Fast and Robust Multiframe Super-Resolution," IEEE Trans. on Image Processing, 2004, 13(10): 1327-1344.
[6] Freeman W. T., Pasztor E. C., Carmichael O. T., "Learning Low-Level Vision," International Journal of Computer Vision, 2000, 40(1): 25-47.
[7] Freeman W. T., Jones T. R., et al., "Example-Based Super-Resolution," IEEE Computer Graphics and Applications, 2002, 22(2): 56-65.
[8] Donoho D. L., Elad M., "Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via l1 Minimization," Proc. of the National Academy of Sciences, 2003, 100(5): 2197-2202.
[9] Mallat S. G., et al., "Matching Pursuits with Time-Frequency Dictionaries," IEEE Trans. on Signal Processing, 1993, 41(12): 3397-3415.
[10] Pati Y. C., Rezaiifar R., Krishnaprasad P. S., "Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition," Proc. of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, USA, 1993: 40-44.
[11] Chen S. S., Donoho D. L., Saunders M. A., "Atomic Decomposition by Basis Pursuit," SIAM Review, 2001, 43(1): 129-159.
[12] Gorodnitsky I. F., Rao B. D., "Sparse Signal Reconstruction from Limited Data Using FOCUSS: A Re-Weighted Minimum Norm Algorithm," IEEE Trans. on Signal Processing, 1997, 45(3): 600-616.
[13] Aharon M., Elad M., Bruckstein A. M., "The K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation," IEEE Trans. on Signal Processing, 2006, 54(11): 4311-4322.
[14] Tomasi C., Manduchi R., "Bilateral Filtering for Gray and Color Images," Proc. of the 6th IEEE International Conference on Computer Vision, Bombay, India, 1998: 839-846.