This work examines the effect of block size on the attributes (robustness, capacity, watermarking time, visibility and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT). The DCT decomposes the image into frequency bands and allows watermark data to be embedded easily; the advantage of this transform is its ability to pack the input image's energy into a few coefficients. The 8 x 8 block size is the one commonly used in watermarking, so this work investigates the effect of block sizes below and above 8 x 8 on the watermark's attributes. Robustness and capacity increase as the block size increases (62-70 dB, 31.5-35.9 bits/pixel), while the time for watermarking decreases. The watermark remains visible for block sizes below 8 x 8 but is invisible for those above it. Distortion falls sharply from a high value at the 2 x 2 block size to a minimum at 8 x 8 and then rises gradually with block size. The overall observation is that the watermarked image gradually loses quality through fading above the 8 x 8 block size. For easy detection of image piracy, the 16 x 16 block size gives the best result because the output closely resembles the original image in visual quality even though it contains a hidden watermark.
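The abstract does not spell out the embedding algorithm itself; the sketch below shows one common approach, additive embedding in a mid-frequency DCT coefficient of each block, with the block size exposed as a parameter so the trade-offs described above can be explored. The coefficient position (1, 1), the strength `alpha` and the function names are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of block-wise DCT watermark embedding with a
# variable block size. Coefficient (1, 1) and alpha are assumptions.
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    # Separable 2D DCT-II with orthonormal scaling.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_watermark(image, bits, block_size=8, alpha=10.0):
    """Embed one watermark bit per block by nudging a mid-frequency
    DCT coefficient up (bit 1) or down (bit 0)."""
    img = image.astype(float).copy()
    h, w = img.shape
    k = 0
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            if k >= len(bits):
                return img
            block = dct2(img[y:y + block_size, x:x + block_size])
            block[1, 1] += alpha if bits[k] else -alpha
            img[y:y + block_size, x:x + block_size] = idct2(block)
            k += 1
    return img

# Usage: embed one bit per block at several block sizes and compare
# payload and distortion, in the spirit of the study above.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64)).astype(float)
for bs in (4, 8, 16):
    nblocks = (64 // bs) ** 2
    bits = rng.integers(0, 2, nblocks)
    marked = embed_watermark(image, bits, block_size=bs)
    print(bs, 'bits:', nblocks,
          'MSE:', round(float(np.mean((marked - image) ** 2)), 3))
```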
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM... (IJDKP)
Urban surveillance systems generate huge amounts of video and image data and put high pressure on recording disks, which makes video a key point of big data research. Since videos are composed of images, the degree and efficiency of image compression are of great importance. Although the DCT-based JPEG standard is widely used, it suffers from encoding deficiencies: block artifacts, for instance, frequently have to be removed. In this paper, we propose a new, simple but effective method to quickly reduce the visual block artifacts of DCT-compressed images for urban surveillance systems. Simulation results demonstrate that the proposed method achieves better quality than widely used filters while consuming far fewer CPU resources.
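The paper's filter is not described in this abstract; as a point of reference, the sketch below shows the simplest kind of deblocking: smoothing only the pixels that sit on 8 x 8 block boundaries. It is an assumption-laden baseline, not the proposed method.

```python
# Baseline deblocking sketch: blend the two pixels on either side of
# each 8x8 block edge toward their average, leaving interiors alone.
import numpy as np

def deblock(img, block=8, strength=0.5):
    out = img.astype(float).copy()
    h, w = out.shape
    for x in range(block, w, block):       # vertical block edges
        left, right = out[:, x - 1], out[:, x]
        mid = (left + right) / 2.0
        out[:, x - 1] = (1 - strength) * left + strength * mid
        out[:, x] = (1 - strength) * right + strength * mid
    for y in range(block, h, block):       # horizontal block edges
        top, bot = out[y - 1, :], out[y, :]
        mid = (top + bot) / 2.0
        out[y - 1, :] = (1 - strength) * top + strength * mid
        out[y, :] = (1 - strength) * bot + strength * mid
    return out
```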
Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images (ijtsrd)
A new artifact removal method, a cascade of a Gaussian fuzzy edge decider and fuzzy image correction, is proposed. The design targets highly compressed, i.e. low-bit-rate, images. Each overlapped image block is fed to a Gaussian fuzzy decider that checks whether the central pixel of the block needs correction; if so, the central pixel is corrected using the fuzzy gradient of its neighbours. Experimental results show remarkable improvement of the presented gFAR algorithm over past methods, both subjectively in visual quality and objectively in PSNR. Deepak Gambhir, "Gaussian Fuzzy Blocking Artifacts Removal of High DCT Compressed Images", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 4, Issue 6, October 2020. URL: https://www.ijtsrd.com/papers/ijtsrd33361.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/33361/gaussian-fuzzy-blocking-artifacts-removal-of-high-dct-compressed-images/deepak-gambhir
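gFAR's exact membership functions are not given in the abstract; the sketch below illustrates the general pattern it describes: a Gaussian membership over the local gradient decides how strongly the central pixel of each neighbourhood is pulled toward the local mean. All constants are assumptions.

```python
# Sketch of a Gaussian-fuzzy corrector: the flatter the 3x3
# neighbourhood (small gradient), the more the centre pixel is treated
# as a blocking artifact and pulled toward the neighbourhood mean.
import numpy as np

def gaussian_membership(g, sigma=10.0):
    # Degree (0..1) to which a gradient magnitude counts as "smooth".
    return np.exp(-(g ** 2) / (2 * sigma ** 2))

def fuzzy_correct(img, sigma=10.0):
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            grad = np.abs(win - win[1, 1]).mean()   # crude local gradient
            mu = gaussian_membership(grad, sigma)    # correction strength
            out[y, x] = (1 - mu) * win[1, 1] + mu * win.mean()
    return out
```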
We present a new image compression method that improves visual perception of the decompressed images and achieves a higher compression ratio. The method balances compression rate against image quality by concentrating on the essential parts of the image: its edges. The key subject (edges) matters more than the background (non-edges). Taking into account the value of image components and the effect of smoothness on compression, the method classifies image components as edge or non-edge; low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. Results show that the suggested method is efficient in terms of compression ratio, bits per pixel and peak signal-to-noise ratio.
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES (cscpconf)
Segmentation of Synthetic Aperture Radar (SAR) images is of great use in observing the global environment and in target detection and recognition, but it is known to be a very complex task due to the presence of speckle noise. In this paper we therefore present a fast SAR image segmentation based on between-class variance (BCV). We chose the BCV method because it is one of the most effective thresholding techniques for most real-world images with regard to uniformity and shape measures. Our experiments test which technique is effective at thresholding (extracting) oil spills in numerous SAR images; in the future these thresholding techniques could be very useful for detecting objects in other SAR images.
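Between-class variance is the criterion behind Otsu's method: choose the threshold that maximises the variance between the two classes it induces. A minimal sketch follows; the 256-bin histogram assumes 8-bit imagery.

```python
# Otsu-style thresholding: pick the grey level that maximises the
# between-class variance (BCV) of the two classes it separates.
import numpy as np

def bcv_threshold(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # grey-level probabilities
    omega = np.cumsum(p)                  # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean
    # BCV(t) = (mu_t*omega - mu)^2 / (omega*(1-omega)); guard div by zero.
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan
    bcv = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(bcv))

# Usage on synthetic bimodal data.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 5000),
                      rng.normal(180, 10, 5000)]).clip(0, 255).astype(np.uint8)
print('threshold:', bcv_threshold(img))
```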
FORGERY (COPY-MOVE) DETECTION IN DIGITAL IMAGES USING BLOCK METHOD (editorijcres)
AKHILESH KUMAR YADAV, DEENBANDHU SINGH, VIVEK KUMAR
Department of Computer Science and Engineering
Babu Banarasi Das University, Lucknow
akhi2232232@gmail.com, deenbandhusingh85@gmail.com, vivek.kumar0091@gmail.com
ABSTRACT- Digital images can be easily modified using powerful image editing software. Distinguishing innocent manipulations, such as sharpening, from malicious ones, such as removing or adding parts of an image, is the topic of this paper. We focus on detecting a special type of forgery, the copy-move forgery, in which a part of the original image is copied and pasted at a desired location within the same image. The proposed method compresses images using the discrete wavelet transform (DWT), divides them into blocks, computes a feature vector per block, sorts the vectors lexicographically, and identifies duplicated blocks after sorting. The method withstands manipulations and attacks such as scaling, rotation, Gaussian noise, smoothing and JPEG compression.
INDEX TERMS- Copy-Move forgery, Wavelet Transform, Lexicographical Sorting, Region Duplication Detection.
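A minimal sketch of the block-matching stage just described: overlapping blocks are reduced to small feature vectors, sorted lexicographically, and adjacent identical rows flag candidate duplicated regions. The DWT pre-compression step is omitted for brevity, and the coarse quantisation is an assumption.

```python
# Sketch of copy-move detection by lexicographic block sorting:
# identical (quantised) overlapping blocks end up adjacent after
# sorting, exposing duplicated regions.
import numpy as np

def find_duplicates(img, block=8, step=4):
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            b = img[y:y + block, x:x + block]
            feats.append((b // 16).ravel())   # coarse quantisation
            coords.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])         # lexicographic row sort
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        if np.array_equal(feats[i], feats[j]):
            pairs.append((coords[i], coords[j]))
    return pairs

# Usage: forge an image by copying one region onto another.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]           # simulated copy-move
print(len(find_duplicates(img)), 'matching block pairs found')
```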
Segmentation by Fusion of Self-Adaptive SFCM Cluster in Multi-Color Space Com... (CSCJournals)
This paper proposes a new, simple and efficient segmentation approach with diverse applications in pattern recognition and computer vision, particularly colour image segmentation. First, we choose the best segmentation components among six different colour spaces. Then histogram and SFCM techniques are applied to initialise the segmentation. Finally, we fuse the segmentation results and merge similar regions. Extensive experiments on the Berkeley image database show that, compared with classical segmentation algorithms such as Mean-Shift, FCR and CTM, our method yields comparable or better image partitioning, which illustrates its practical value.
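SFCM is a spatially constrained variant of fuzzy c-means; the plain fuzzy c-means core it builds on is sketched below for colour pixels, with the usual fuzzifier m = 2. This is generic FCM, not the paper's self-adaptive variant.

```python
# Plain fuzzy c-means on colour pixels (fuzzifier m = 2). SFCM adds a
# spatial regulariser on top of this core update loop.
import numpy as np

def fcm(pixels, c=3, m=2.0, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n = len(pixels)
    u = rng.random((c, n)); u /= u.sum(axis=0)      # random memberships
    for _ in range(iters):
        um = u ** m
        centers = um @ pixels / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(pixels[None] - centers[:, None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                           # normalise memberships
    return centers, u

# Usage: cluster 1000 random RGB pixels into 3 fuzzy classes.
pix = np.random.default_rng(3).random((1000, 3))
centers, u = fcm(pix)
print('cluster centers:\n', centers.round(3))
```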
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE (acijjournal)
Colour, texture, shape and luminance are the prominent features for image segmentation. Texture is an organised group of spatially repetitive arrangements in an image and a vital attribute in many image processing and computer vision applications. The objective of this work is to segment the texture sub-images of a given arbitrary image. Its main contribution is to introduce the “NEW” texture feature descriptor to the image segmentation field. The NEW descriptor labels the neighbourhood pixels of a pixel as N, W, NW, NE, WW, NN and NNE (N = North, W = West). To find the prediction value, the gradients of the intensity function are calculated. Eight-component binary vectors are formed and compared to the prediction value, ending up with 256 possible vectors. Fuzzy c-means clustering is used to segment similar regions in the texture image. Extensive experimentation shows that the proposed methodology works well for segmenting texture images, and segmentation performance is also evaluated.
Novel DCT based watermarking scheme for digital images (IDES Editor)
There is ever-growing interest in copyright protection of multimedia content, so digital watermarking techniques are widely practised; with ubiquitous internet connectivity and digital libraries, watermarking for protecting digital content is extensively researched. In this paper we present a novel watermark generation scheme based on the histogram of the image, and apply the watermark to the original image in the transform (DCT) domain. We further study the performance of the watermark against some common attacks on images. Experimental results show that the embedded watermark is imperceptible and image quality is not degraded.
Copy-Rotate-Move Forgery Detection Based on Spatial Domain (SondosFadl)
We propose an efficient and fast method for detecting copy-move regions in the spatial domain, even when the copied region has undergone rotation.
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP (csandit)
Most watermarking methods use pixel values or coefficients as the judgment condition when embedding or extracting a watermark image. Variation of these values may corrupt that condition and lead to incorrect judgments. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation: the judgment depends on the scale relationship (ordering) of two pixels. Observation of common signal processing operations shows that this relationship usually remains stable unless the image has been manipulated by a cropping attack or halftone transformation, which greatly helps reduce the modification strength needed against image processing operations. Experimental results show that the proposed method resists various attacks while keeping image quality friendly.
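A minimal sketch of order-based embedding in this spirit: each watermark bit is encoded in the ordering of a pixel pair, swapping the two values when needed, so moderate value drift leaves the decision intact. The pair selection and the noise margin are illustrative assumptions, not the paper's scheme.

```python
# Order-based watermarking sketch: each bit is stored as the relative
# order of a fixed pixel pair; mild value changes preserve the order.
import numpy as np

def embed_bits(img, bits, pairs, margin=4):
    out = img.astype(int).copy()
    for bit, ((y1, x1), (y2, x2)) in zip(bits, pairs):
        a, b = int(out[y1, x1]), int(out[y2, x2])
        if (a >= b) != bool(bit):
            a, b = b, a                        # swap to encode the bit
        if abs(a - b) < margin:                # enforce a noise margin
            a = min(255, b + margin) if bit else max(0, b - margin)
        out[y1, x1], out[y2, x2] = a, b
    return out

def extract_bits(img, pairs):
    return [int(img[y1, x1] >= img[y2, x2]) for (y1, x1), (y2, x2) in pairs]

# Usage: embed 8 bits, add mild noise, and recover them.
rng = np.random.default_rng(4)
img = rng.integers(0, 256, (32, 32))
pairs = [((i, 0), (i, 16)) for i in range(8)]  # illustrative pairs
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(img, bits, pairs)
noisy = marked + rng.integers(-2, 3, marked.shape)
print(extract_bits(noisy, pairs))
```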
FAN search for image copy-move forgery - amalta 2014 (SondosFadl)
The proposed Fan Search (FS) algorithm starts once a duplicated block is detected: instead of exhaustively searching all blocks, the blocks near the detected block are examined first, in spiral order.
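A sketch of the traversal such a search implies: starting from the detected block, neighbouring block coordinates are visited ring by ring outward. The generator below is a generic outward traversal, an assumption about how "spiral order" is realised.

```python
# Generic outward traversal around a start cell: visit the nearest
# ring of neighbours first, then progressively wider rings.
def spiral_order(cy, cx, radius):
    yield cy, cx
    for r in range(1, radius + 1):
        # top and bottom rows of the ring
        for x in range(cx - r, cx + r + 1):
            yield cy - r, x
            yield cy + r, x
        # left and right columns, excluding corners already visited
        for y in range(cy - r + 1, cy + r):
            yield y, cx - r
            yield y, cx + r

# Usage: examine blocks near (5, 5) before distant ones.
for y, x in spiral_order(5, 5, 1):
    print(y, x)
```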
Secure High Capacity Data Hiding in Images using EDBTC (ijsrd.com)
Block truncation coding (BTC) is an efficient compression technique with low computational complexity, but it suffers from two major issues: blocking and false contour effects. Error-diffused BTC (EDBTC) remedies these deficiencies using visual low-pass compensation on the bitmap. In this paper, complementary hiding EDBTC is developed to resolve the issue: first a single watermark is embedded, then multiple watermarks. An adaptive external bias factor is usually used to embed the watermark, but it damages image quality and robustness, so here an extremely small bias factor controls the embedding, enabling a high-capacity scenario without significantly damaging image quality. The few data hiding schemes proposed so far damage the characteristics of BTC. The security of the embedded watermark is high enough that it cannot easily be extracted by malicious users: the watermark is encrypted with a standard encryption algorithm before it is embedded.
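For reference, plain BTC (which EDBTC extends with error diffusion) quantises each block to two levels chosen to preserve the block's mean and variance; a minimal sketch:

```python
# Classic BTC: each block keeps only two levels derived from its mean
# and std, plus a 1-bit-per-pixel bitmap; decoding maps bits back to
# the two levels. Mean and variance of the block are preserved.
import numpy as np

def btc_block(block):
    n = block.size
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = int(bitmap.sum())
    if q in (0, n):                       # flat block: one level only
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

# Usage on one 4x4 block.
rng = np.random.default_rng(5)
block = rng.integers(0, 256, (4, 4)).astype(float)
bitmap, low, high = btc_block(block)
rec = btc_decode(bitmap, low, high)
print('mean in/out:', block.mean(), rec.mean())
```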
Performance Comparison of Image Retrieval Using Fractional Coefficients of Tr... (CSCJournals)
The thirst for better and faster retrieval techniques has always fuelled research in content-based image retrieval (CBIR). This paper presents innovative CBIR techniques based on feature vectors formed from fractional coefficients of transformed images, using the Discrete Cosine, Walsh, Haar and Kekre transforms. The energy compaction of these transforms is exploited to greatly reduce the feature vector size per image by taking only fractional coefficients of the transformed image. Feature vectors are extracted from the transformed image in fifteen different ways: first using all the coefficients, then fourteen reduced coefficient sets (50%, 25%, 12.5%, 6.25%, 3.125%, 1.5625%, 0.7813%, 0.39%, 0.195%, 0.097%, 0.048%, 0.024%, 0.012% and 0.006% of the complete transformed image). The four transforms are applied to grey-scale equivalents and to the colour components of images to extract Gray and RGB feature sets respectively. Using these reduced coefficient sets for gray as well as RGB feature vectors, instead of all coefficients, results in better performance and lower computation. The proposed CBIR techniques are implemented on a database of 1000 images spread across 11 categories; for each technique, 55 queries (5 per category) are fired at the database and net average precision and recall are computed for all feature sets per transform. The results show performance improvement (higher precision and recall) with fractional coefficients compared to the complete transform, at reduced computation and hence faster retrieval. Finally, Kekre's transform surpasses all other discussed transforms, with the highest precision and recall for fractional coefficients (6.25% and 3.125% of all coefficients) and computation lowered by 94.08% compared to DCT.
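A sketch of the fractional-coefficient idea: after a 2D DCT, energy concentrates in the top-left corner of the coefficient matrix, so keeping only that corner yields a compact feature vector. The square crop shape is an assumption.

```python
# Fractional-coefficient CBIR feature: keep only the top-left corner
# of the 2D DCT, where the transform packs most of the image energy.
import numpy as np
from scipy.fft import dct

def dct2(img):
    return dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')

def fractional_feature(img, fraction=0.0625):
    coeffs = dct2(img.astype(float))
    side = max(1, int(round(np.sqrt(fraction) * coeffs.shape[0])))
    return coeffs[:side, :side].ravel()   # square top-left crop

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (64, 64))
for frac in (1.0, 0.25, 0.0625):
    print(frac, '->', fractional_feature(img, frac).size, 'features')
```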
Improved anti-noise attack ability of image encryption algorithm using de-noi... (TELKOMNIKA JOURNAL)
Information security is one of the important issues of the information age, used to preserve secret information throughout transmission in practical applications. Many schemes have been applied to image encryption; such approaches fall into two domains, frequency and spatial. The presented work develops an encryption technique on the basis of a conventional watermarking system using singular value decomposition (SVD), the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness compared to other conventional approaches, and is further enhanced against Gaussian noise attacks using a DWT-based denoising step. MSE and peak signal-to-noise ratio (PSNR) are the performance measures on which this study's results are based, and they show that the algorithm has high robustness against Gaussian noise attacks.
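A hedged sketch of how the three transforms typically combine in such schemes: a DWT isolates the approximation band, a DCT compacts it, and the payload is added to the singular values. The LL-band choice and `alpha` are illustrative assumptions; PyWavelets (`pywt`) is assumed available.

```python
# Sketch of DWT-DCT-SVD embedding: DWT -> DCT of the LL band -> SVD,
# then additive embedding into the singular values.
import numpy as np
import pywt
from scipy.fft import dct, idct

def dct2(a):
    return dct(dct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(a):
    return idct(idct(a, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed(image, watermark, alpha=0.05):
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')
    C = dct2(LL)
    U, S, Vt = np.linalg.svd(C)
    S_marked = S + alpha * watermark          # additive SV embedding
    C_marked = (U * S_marked) @ Vt
    return pywt.idwt2((idct2(C_marked), (LH, HL, HH)), 'haar')

rng = np.random.default_rng(7)
img = rng.integers(0, 256, (64, 64)).astype(float)
wm = rng.random(32)                            # LL of 64x64 is 32x32
marked = embed(img, wm)
print('embedding distortion (MSE):', np.mean((marked - img) ** 2))
```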
GREY LEVEL CO-OCCURRENCE MATRICES: GENERALISATION AND SOME NEW FEATURES (ijcseit)
Grey Level Co-occurrence Matrices (GLCM) are one of the earliest techniques used for image texture analysis. In this paper we define a new feature, the trace, extracted from the GLCM, and discuss its implications for texture analysis in the context of Content Based Image Retrieval (CBIR). The theoretical extension of the GLCM to n-dimensional grey-scale images is also discussed. The results indicate that trace features outperform Haralick features when applied to CBIR.
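A sketch of a GLCM for one displacement, plus the trace feature the paper names: the trace sums the diagonal of the normalised co-occurrence matrix, i.e. the probability that a pixel pair at the given offset shares the same grey level. The offset and level count here are assumptions.

```python
# GLCM for a single displacement, plus the "trace" feature: the sum of
# the normalised matrix's diagonal (probability of equal-valued pairs).
import numpy as np

def glcm(img, dy=0, dx=1, levels=256):
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (src.ravel(), dst.ravel()), 1.0)
    return m / m.sum()

def trace_feature(img, **kw):
    return np.trace(glcm(img, **kw))

rng = np.random.default_rng(8)
smooth = np.repeat(rng.integers(0, 8, (32, 16)), 2, axis=1) * 32
noisy = rng.integers(0, 256, (32, 32))
print('trace smooth:', trace_feature(smooth))   # many equal pairs
print('trace noisy :', trace_feature(noisy))    # few equal pairs
```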
Evaluation of graphic effects embedded image compression (IJECEIAES)
A fundamental factor in digital image compression is the conversion process, whose intention is to understand the shape of an image and to convert the digital image to a grayscale configuration on which the compression encoding operates. This article investigates compression algorithms for images with artistic effects; a key component is how to effectively preserve the original quality of the images. Image compression condenses images by reducing their redundant data so that they can be handled cost-effectively. The common techniques include the discrete cosine transform (DCT), fast Fourier transform (FFT) and shifted FFT (SFFT). Experimental results report and compare compression ratios between original RGB images and grayscale images; the superior algorithm for improving shape comprehension of images with graphic effects is the SFFT technique.
Denoising Process Based on Arbitrarily Shaped Windows (CSCJournals)
Many factors, such as moving objects, introduce noise into digital images, and its presence degrades image quality. The denoising process reconstructs a noiseless image and improves its quality; when an image carries additive white Gaussian noise (AWGN), denoising becomes challenging. In this research we present an improved algorithm for image denoising in the wavelet domain: homogeneous regions of the input image are estimated with a region-merging algorithm, and the local variance and a wavelet shrinkage algorithm are applied to denoise each image patch. Experimental results based on peak signal-to-noise ratio (PSNR) show that our algorithm outperforms a denoising algorithm based on a minimum mean square error (MMSE) estimator.
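A sketch of generic wavelet shrinkage, the building block named above: detail coefficients are soft-thresholded with the universal threshold. The noise estimate from the finest diagonal band's median and the `db2` wavelet are common choices, not the paper's region-based scheme; PyWavelets (`pywt`) is assumed available.

```python
# Wavelet shrinkage denoising sketch: soft-threshold all detail
# coefficients with the universal threshold sigma*sqrt(2*log(n)).
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db2', level=2):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Estimate noise sigma from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode='soft') for c in band)
        for band in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

rng = np.random.default_rng(9)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth ramp image
noisy = clean + rng.normal(0, 20, clean.shape)       # AWGN, sigma = 20
den = wavelet_denoise(noisy)[:64, :64]
print('noisy MSE:', np.mean((noisy - clean) ** 2).round(1),
      'denoised MSE:', np.mean((den - clean) ** 2).round(1))
```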
Region duplication forgery detection in digital images (Rupesh Ambatwad)
Region duplication, or copy-move forgery, is a common type of tampering used to create a fake image. The field of blind image forensics rests on assessing the authenticity of digital images. Since in copy-move forgery the duplicated region belongs to the same image, detection is complex because the tampering leaves no visual clue; it does, however, give rise to glitches at the pixel level.
PIPELINED ARCHITECTURE OF 2D-DCT, QUANTIZATION AND ZIGZAG PROCESS FOR JPEG IM... (VLSICS Design)
This paper presents the architecture and VHDL design of a two-dimensional Discrete Cosine Transform (2D-DCT) with quantization and zigzag arrangement, used as the core and datapath in JPEG image compression hardware. The 2D-DCT calculation exploits the transform's separability, dividing the whole architecture into two 1D-DCT calculations connected by a transpose buffer. Architectures for the quantization and zigzag processes are also described; quantization is done using a division operation. The design targets a Spartan-3E XC3S500E FPGA. The 2D-DCT architecture uses 1891 slices, 51 I/O pins and 8 multipliers of one Xilinx Spartan-3E XC3S500E FPGA, and reaches an operating frequency of 101.35 MHz. One input block of 8 x 8 elements of 8 bits each is processed in 6604 ns, and the pipeline latency is 140 clock cycles.
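A small software model of the two hardware stages described above, the separable 2D-DCT (two 1D passes) and the zigzag reordering, useful as a reference when verifying such a pipeline. This is a behavioural sketch in Python, not the paper's VHDL.

```python
# Software model of the separable 2D-DCT plus zigzag scan used in
# JPEG pipelines: row 1D-DCT, then column 1D-DCT, then reordering.
import numpy as np
from scipy.fft import dct

def dct2_separable(block):
    rows = dct(block, axis=1, norm='ortho')   # first 1D-DCT pass
    return dct(rows, axis=0, norm='ortho')    # second pass ("transpose")

def zigzag_indices(n=8):
    # Sort coordinates by anti-diagonal, alternating direction so that
    # odd diagonals run top-to-bottom and even ones bottom-to-top.
    coords = [(y, x) for y in range(n) for x in range(n)]
    return sorted(coords, key=lambda p: (p[0] + p[1],
                                         p[0] if (p[0] + p[1]) % 2 else p[1]))

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2_separable(block)
zz = np.array([coeffs[y, x] for y, x in zigzag_indices()])
print('DC coefficient:', zz[0].round(2))
```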
Enhancement of genetic image watermarking robust against cropping attack (ijfcstjournal)
An enhancement of an image watermarking algorithm, made robust against a particular attack using a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking; to preserve both characteristics at reasonable values, a genetic algorithm is used. Several factors are introduced to provide robustness against the cropping attack: the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
Robust Digital Watermarking Scheme of Anaglyphic 3D for RGB Color Images (CSCJournals)
In this paper, a digital watermarking technique using spread spectrum (SS) technology and adaptive dither modulation (DM) with an improved Watson perception model is applied to copyright protection of anaglyphic 3D images. The improved Watson model solves the problem that the slack does not change linearly with amplitude scaling. Experimental results show that the watermarking schemes resist Gaussian noise, salt-and-pepper noise, JPEG compression, constant luminance change and valumetric scaling, and that the scheme employing the improved Watson perception model is better than the one using the unimproved model. Comparative experiments with the works [4] and [19] were also carried out: the proposed approach is not sensitive to JPEG compression, while the QIM-based alternative is not sensitive to constant luminance change and valumetric scaling.
SECURED COLOR IMAGE WATERMARKING TECHNIQUE IN DWT-DCT DOMAIN (ijcseit)
A multilayer secured DWT-DCT and YIQ colour space based image watermarking technique with robustness and better correlation is presented here. Security is increased through multiple PN sequences, Arnold scrambling, the DWT domain, the DCT domain and colour space conversions. Peak signal-to-noise ratio (PSNR) and normalized correlation (NC) are used as measurement metrics. Colour images of size 512x512 with different histograms are used for testing; a 64x64 watermark is embedded in the HL region of the DWT, a 4x4 DCT is used, the 'Haar' wavelet is used for decomposition, and a direct flexing factor is applied. We obtain a PSNR of 63.9988 for flexing factor k=1 on the Lena image, and a maximum NC of 0.9781 for flexing factor k=4 in the Q colour space. Comparative performance in the Y, I and Q colour spaces is presented. The technique is robust to different attacks such as scaling, compression and rotation.
A Review on Image Compression using DCT and DWT (IJSRD)
Image compression addresses the problem of reducing the amount of data needed to represent a digital image. Several transformation techniques are used for data compression; the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are the most widely used. The DCT transforms an image from the spatial domain to the frequency domain; it has a high energy compaction property and requires few computational resources. The DWT, on the other hand, is a multi-resolution transformation. This paper reviews various approaches that different researchers have used for image compression, analysed in terms of the performance parameters peak signal-to-noise ratio, bit error rate, compression ratio, mean square error, and time taken for decomposition and reconstruction.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS (csandit)
Fingerprint image compression based on geometric transforms is an important research topic; in recent years many transforms have been proposed to best represent this particular type of image, among them classical wavelets and wave atoms. In this paper we present a comparative study between these transforms with a view to their use in compression. The results show that for fingerprint images the wave atom transform outperforms the current transform-based compression standard, contributing considerably to the compression of fingerprint images by achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
Color image compression based on spatial and magnitude signal decomposition (IJECEIAES)
In this paper, a simple colour image compression system is proposed using image signal decomposition. The RGB colour bands are converted to the less correlated YUV colour model, and the pixel value (magnitude) in each band is decomposed into two values, most significant and least significant. Because the most significant value (MSV) is affected by even simple modifications, an adaptive lossless compression system is proposed for it using bit-plane (BP) slicing, delta pulse code modulation (Delta PCM), adaptive quadtree (QT) partitioning and an adaptive shift encoder. A lossy compression system handles the least significant value (LSV); it is based on an adaptive, error-bounded coding system using the DCT compression scheme. The performance of the developed system was analysed and compared with the universal JPEG standard, and the results indicate performance comparable to or better than the JPEG standard.
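A sketch of the magnitude split the abstract describes: each 8-bit pixel is divided into a most significant value (top bits, coded losslessly) and a least significant value (bottom bits, coded lossily). The 4/4 bit split is an illustrative assumption.

```python
# MSV/LSV decomposition: split each 8-bit sample into top and bottom
# bit groups, and verify the split is exactly invertible.
import numpy as np

def split_msv_lsv(band, msv_bits=4):
    shift = 8 - msv_bits
    msv = band >> shift                   # most significant value
    lsv = band & ((1 << shift) - 1)       # least significant value
    return msv, lsv

def merge_msv_lsv(msv, lsv, msv_bits=4):
    return (msv << (8 - msv_bits)) | lsv

rng = np.random.default_rng(10)
band = rng.integers(0, 256, (4, 4)).astype(np.uint8)
msv, lsv = split_msv_lsv(band)
assert np.array_equal(merge_msv_lsv(msv, lsv), band)
print('MSV range:', msv.min(), msv.max(), '| LSV range:', lsv.min(), lsv.max())
```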
A novel RRW framework to resist accidental attacks (eSAT Journals)
Abstract - Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright while preserving the intactness of host images and providing robustness against unintentional attacks. Past histogram-rotation-based methods suffer from extremely poor invisibility of the watermarked images and limited robustness when extracting watermarks from watermarked images degraded by unintentional attacks. This paper proposes a wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC) method with enhanced pixel-wise masking (EPWM). The method constructs new watermark embedding and extraction procedures based on histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity, and achieves both reversibility and invisibility. Experimental results show comprehensive performance in terms of reversibility, robustness, invisibility, capacity and run-time complexity, widely applicable to different kinds of images. Keywords: integer wavelet transform, k-means clustering, masking, robust reversible watermarking (RRW)
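Histogram shifting, the reversible-embedding primitive WSQH-SC builds on, can be sketched in a few lines: values between a peak bin and a near-empty bin are shifted by one to free the bin next to the peak, and peak-valued pixels then carry one bit each. This is the generic primitive, not the paper's wavelet-domain statistical-quantity variant; a real coder would record any non-empty "zero" bin pixels for exact reversibility.

```python
# Generic histogram-shifting embedding: shift values between the peak
# bin and a (near-)empty bin to free peak+1, then let peak-valued
# pixels each carry one watermark bit (bit 1 -> peak+1).
import numpy as np

def hs_embed(img, bits):
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest bin > peak
    out = img.astype(int).copy()
    out[(out > peak) & (out < zero)] += 1            # shift to free peak+1
    it = iter(bits)
    flat = out.ravel()
    for i in np.flatnonzero(flat == peak):
        try:
            flat[i] += next(it)                      # bit 1 -> peak+1
        except StopIteration:
            break
    return flat.reshape(img.shape), peak, zero

rng = np.random.default_rng(11)
img = rng.integers(0, 200, (16, 16))
marked, peak, zero = hs_embed(img, [1, 0, 1, 1])
print('peak bin:', peak, 'zero bin:', zero)
```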
Similar to Effect of Block Sizes on the Attributes of Watermarking Digital Images
Wearables are small electronic devices, often comprising one or more sensors and having computational capability. Devices such as wrist watches, pens and glasses with built-in cameras are now available at low prices for users to monitor or secure themselves. The Nigerian state is currently faced with many kidnapping activities in schools and homes, and abductions for ransom and other illegal activities, which necessitated this review. The success of wearable technology in medical use prompted research into its application to security. The research method is the use of case studies and literature search. This paper looks at possible applications of wearable technology to combat cases of abduction and kidnapping in Nigeria.
A REVIEW OF APPLICATIONS OF THEORY OF COMPUTATION AND AUTOMATA TO MUSIC (Dr. Michael Agbaje)
Theory of computation and automata is a theoretical branch of computer science. It established its roots during the 20th century, when mathematicians began developing, theoretically and literally, machines which mimic certain features of man, completing calculations more quickly and reliably. The word automaton is closely related to the word "automation", meaning automatic processes carrying out specific tasks. Automata theory deals with the logic of computation with respect to simple machines, referred to as automata. Through automata, computer scientists are able to understand how machines compute functions and solve problems and, more importantly, what it means for a function to be computable or for a question to be decidable (Stanford, 2004; Cristopher, 2013).
A software process model is a standardised format for planning, organising and running a new software development project. The need to complete and deliver software projects faster requires using a suitable model.
Several kinds of models are in use, having evolved over the years. In this paper we survey the following main types of model by describing their characteristic features: waterfall model, V-model, component assembly model, chaos model, incremental model, prototyping model, spiral model, rapid application development (RAD) model, agile model, rational unified process (RUP), Iconix process and software ecosystem (SECO) model.
We conclude the study by listing the strengths and weaknesses of each of these models.
Ethics has to do with moral principles that control or influence a person's behaviour, and research ethics has taken a prime position in the research process. Digital watermarking, as a technology that embeds information in machine-readable form within the content of a digital media file, could raise privacy issues if deployed in ways that fail to take privacy into account. It can be applied in different applications including digital signatures, fingerprinting, broadcast and publication monitoring, authentication, copy control and secret communication. This paper brings to view various ethical concerns of digital watermarking, such as privacy, piracy, deception and anonymity.
This paper illustrates that cellular automata can model the spread of gossip in our environment using a stochastic model. The spread of a rumour is modelled with homogeneous cells which can either be allowed or disallowed, further influenced by a parity model from physical-science modelling comprising four cells, with a white cell signifying "never heard" and a black cell "having heard". This generates a rule which determines whether a cell lives or dies: if a cell is white and has one or more black neighbours, each black neighbour is considered in turn, and for each the cell changes to black with some specified probability, otherwise it remains white. Once a cell becomes black, it remains black.
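A sketch of the stochastic rumour-spread automaton described above: white cells (0) turn black (1) with probability p per black neighbour, and black cells stay black forever. The grid size, p and the 4-neighbourhood are illustrative assumptions.

```python
# Stochastic rumour-spread cellular automaton: each black (heard)
# neighbour gives a white (never heard) cell one chance, with
# probability p, to turn black; black cells never revert.
import numpy as np

def step(grid, p, rng):
    new = grid.copy()
    h, w = grid.shape
    for y in range(h):
        for x in range(w):
            if grid[y, x] == 1:
                continue                      # black stays black
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and grid[ny, nx] == 1:
                    if rng.random() < p:      # one chance per neighbour
                        new[y, x] = 1
                        break
    return new

rng = np.random.default_rng(12)
grid = np.zeros((21, 21), dtype=int)
grid[10, 10] = 1                               # one initial gossiper
for t in range(20):
    grid = step(grid, p=0.3, rng=rng)
print('heard the rumour after 20 steps:', int(grid.sum()), 'cells')
```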
PARASITIC COMPUTING: PROBLEMS AND ETHICAL CONSIDERATION (Dr. Michael Agbaje)
Parasitic computing is a programming technique whereby a program, through normal authorised interactions with another program, manages to get that program to perform computations of a complex nature. It is, in a sense, a security exploit, in that the program implementing the parasitic computing has no authority to consume the resources made available to the other program. The paper looks at the ethical issues of parasitic computing and suggests a review of the current operation of the internet's TCP/IP.
Short Message Service (SMS) is the text communication service component of phone, web or mobile communication systems, using standardised communication protocols that allow the exchange of short text messages between fixed-line or mobile phone devices. SMS usage worldwide is enormous, with 2.4 billion active users, or 74% of all mobile phone subscribers. This paper develops an SMS voting system that can be used to conduct a trustworthy and generally acceptable election based on the legislation of a particular country. It is based on a level structure and a national SIM card module used only for the electoral process; the SIM card can be used for either the Internet voting system or SMS voting. The method is cheap and fast and guarantees prompt election results.
Broadcast monitoring is the process of tracking and observing activities on broadcasting channels for compliance with intellectual property rights, and for detecting illegal activities not conforming to broadcasting laws, using computer or human systems. The problem poses a unique challenge from the pattern recognition point of view because a very high recognition rate is needed under non-ideal conditions. There is also the problem of comparing a small audio sequence with a large audio stream (the broadcast) while searching for matches. Broadcast monitoring can be active or passive. In this paper we review the various applications and techniques useful to broadcast monitoring systems.
A 3-dimensional motion sensor and tracking system using vector analysis method - Dr. Michael Agbaje
The importance of security in today's world cannot be overemphasized. It is therefore imperative to have cheaper, lower-resource methods of achieving good security. Leaving a home, workplace, child or other important place that needs supervision unattended is nowadays not safe; there should be a form of supervision or surveillance, either in person or by a machine. This research aimed at designing an electronic device to provide such surveillance by sensing and tracking a single object in 3-dimensional (3-D) motion within a defined space. Peering into the natural world, it is found that animals such as dolphins, owls and bats locate objects and prey using sound; some animals even sense distortions in electromagnetic waves. Inspiration was garnered not only from nature but also from current technologies. Three electronic sensors, namely passive infrared, active infrared and ultrasonic, are evaluated and considered for designing a 3-D motion tracking system using the mathematical method of vector analysis. An extensive look into the workings and features of all three sensors was carried out to find which sensor holds the greatest capacity for sensing an object's motion and obtaining its relative position in 3-dimensional space. Once achieved, the measurement can be repeated at distinct time intervals to track the motion of the object in 3-D space. The major benefit of this research is the security and surveillance of property or personnel, but it is not limited to this; there are also opportunities in other areas, such as tracking the 3-D motion of an athlete in sports to evaluate player statistics such as speed and acceleration.
The amount of digital information being created and stored is increasing at an alarming rate. This data is classified and processed to distil and deliver information to users across various industries, for example finance, social networking, gaming and so on. This class of workloads is referred to as throughput computing applications. Multi-core CPUs have been viewed as suitable for processing data in such workloads. However, driven by high computational throughput and energy efficiency, there has been rapid adoption of Graphics Processing Units (GPUs) as computing engines in recent years. GPU computing has emerged as a viable execution platform for throughput-oriented applications or regions of code. GPUs began as independent units for program execution, but there are clear trends towards tight-knit CPU-GPU integration. In this paper, we seek to understand the state-of-the-art Heterogeneous System Architecture (HSA) and examine several key components that make it stand out from other architecture designs, by analysing existing research, articles and reports, as well as future directions and opportunities for HSA systems.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 - Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We ended with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Effect of Block Sizes on the Attributes of Watermarking Digital Images
ISSN 1749-3889 (print), 1749-3897 (online)
International Journal of Nonlinear Science
Vol.9(2010) No.3,pp.358-366
Effect of Block Sizes on the Attributes of Watermarking Digital Images
M. O. Agbaje¹, A. T. Akinwale¹, O. Folorunso¹, A. N. Njah²∗
¹Department of Computer Science, University of Agriculture, Abeokuta, Nigeria.
²Department of Physics, University of Agriculture, Abeokuta, Nigeria.
(Received 22 July 2008, accepted 11 March 2009)
Abstract: This work examines the effect of block sizes on attributes (robustness, capacity, time of watermarking, visibility and distortion) of watermarked digital images using the Discrete Cosine Transform (DCT) function. The DCT function breaks up the image into various frequency bands and allows watermark data to be easily embedded. The advantage of this transformation is the ability to pack input image data into a few coefficients. The block size 8 x 8 is commonly used in watermarking. The work investigates the effect of using block sizes below and above 8 x 8 on the attributes of the watermark. The attributes of robustness and capacity increase as the block size increases (62-70 dB, 31.5-35.9 bit/pixel). The time for watermarking reduces as the block size increases. The watermark is still visible for block sizes below 8 x 8 but invisible for those above it. Distortion decreases sharply from a high value at 2 x 2 block size to a minimum at 8 x 8 and gradually increases with block size. The overall observation indicates that the watermarked image gradually reduces in quality due to fading above 8 x 8 block size. For easy detection of an image against piracy, the block size 16 x 16 gives the best output result because it closely resembles the original image in terms of visual quality despite the fact that it contains a hidden watermark.
Keywords: discrete cosine transform; watermark attributes; multimedia; block sizes
1 Introduction
Watermarking is one of the multimedia security techniques [1-11] (multimedia: multiple forms of media - text, graphics, images, animation, audio and video - used together in the same application). It provides state-of-the-art technologies in the multimedia security area. Digital watermarking is based on the science of steganography, or data hiding [12,13]. Steganography comes from the Greek meaning "covered writing"; it is an area of research for communicating in a hidden manner. The technique of chaos is also applicable in watermarking [14-16].
Koch et al [17] introduced the first efficient watermarking method. In their method, the image is first divided into square blocks of size 8 x 8 for DCT computation. A pair of mid-frequency coefficients is chosen for modification from 12 pre-determined pairs.
Bors and Pitas [18] developed a method that modifies the DCT coefficients satisfying a block site selection constraint. After dividing the image into blocks of size 8 x 8, certain blocks are selected based on a Gaussian network classifier decision. The middle-range frequency DCT coefficients are then modified using either a linear DCT constraint or a circular DCT detection region. Cox et al [19] developed the first frequency-domain watermarking scheme. Other works on watermarking via DCT are found in Suhail et al [20]. Nick Kingsbury [21] used the two-dimensional DCT defined by the expression

g(x, y, u, v) = (2/N) Λ(u) Λ(v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N]

for x, y = 0, 1, ..., N-1 and u, v = 0, 1, ..., N-1, where Λ(ξ) = 1/√2 for ξ = 0 and 1 otherwise, so that g(x, y, 0, 0) = 1/N, consistent with the 8 x 8 form given in section 2.
∗Corresponding author. E-mail address: njahabdul@yahoo.com
Copyright © World Academic Press, World Academic Union. IJNS.2010.06.15/360
The DCT scheme relies on ideas proposed by Cox et al [19], who proposed a watermark consisting of a sequence of randomly generated real numbers. These numbers have a normal distribution with zero mean and unit variance. Watermarking is carried out by modifying the DCT coefficients (C_i, i = 1...N) of the image using the DCT coefficients (W_i, i = 1...N) of the watermark to obtain the DCT coefficients (C'_i, i = 1...N) of the watermarked image according to the equation

C'_i = C_i + α C_i W_i,  i = 1, 2, ..., N,  α = 0.1

If the original image is denoted by Io and the (possibly distorted) watermarked image by I*w, then a possibly corrupted watermark w* can be extracted. This extraction is done by reversing the embedding procedure, using the inverse DCT. Suhail et al [20] did not watermark the whole image; they embedded the watermark in sub-images of the image. The N x M image is subdivided into blocks of size 16 x 16 (256 pixels). The DCT of each block is then computed, after which the DCT coefficients are reordered into a zig-zag scan.
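The following is a minimal sketch of this Cox-style multiplicative embedding and extraction, assuming SciPy's orthonormal DCT routines and that the largest-magnitude AC coefficients carry the watermark; the names embed_cox and extract_cox, the coefficient count and the seed are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of Cox-style multiplicative embedding (C'_i = C_i + a*C_i*W_i),
# assuming SciPy is available; function names, coefficient count and seed are
# illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

def embed_cox(image, n_coeffs=1000, alpha=0.1, seed=0):
    w = np.random.default_rng(seed).standard_normal(n_coeffs)  # N(0,1) watermark
    C = dctn(image.astype(float), norm='ortho')
    flat = C.flatten()
    order = np.argsort(np.abs(flat))[::-1]        # largest-magnitude coefficients
    idx = order[order != 0][:n_coeffs]            # skip the DC term at index 0
    flat[idx] *= 1.0 + alpha * w                  # C'_i = C_i + alpha * C_i * W_i
    return idctn(flat.reshape(C.shape), norm='ortho'), w, idx

def extract_cox(marked, original, idx, alpha=0.1):
    Cm = dctn(marked.astype(float), norm='ortho').ravel()
    Co = dctn(original.astype(float), norm='ortho').ravel()
    return (Cm[idx] - Co[idx]) / (alpha * Co[idx])  # invert the embedding rule
```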
Juan R et al [22] analyzed the DCT applied to blocks of 8 x 8 pixels, as in the JPEG algorithms. The main goal was to present a novel analytical framework that allows the assessment of the performance of a given watermarking method in the DCT domain. They were able to discover a new detector/decoder structure that outperforms the existing ones, which are usually based on calculating the correlation coefficient between the image and the watermark. Chaw-Sang et al [23] performed a performance-factor analysis of wavelet-based watermarking methods.
The aim of this paper is to determine the effect of the block sizes used in watermarking digital images (i.e. 2 x 2, 4 x 4, 8 x 8, 16 x 16, 32 x 32 and 64 x 64) on various attributes (robustness, capacity, time of watermarking, visibility and distortion) of the watermarked image. The step size, or quantization step, is a parameter that can be changed to set the final quality of the image; the greater the step size, the greater the compression. A block is an N x N matrix that can represent either luminance values or the corresponding DCT coefficients.
The rest of the work is organised as follows: section 2 deals with the theory of the DCT. Section 3 deals with the effects of block sizes on the attributes of the watermarked image. Section 4 states the experimental results and section 5 concludes the paper.
2 Theory of Discrete Cosine Transform (DCT)
Ahmed, Natarajan, and Rao developed the discrete cosine transform in 1974 [25]. It is a close relative of the discrete Fourier transform (DFT). The DCT is a technique for converting a signal into elementary frequency components [26, 27]. Figure 1 shows the DCT transformation function from the spatial to the frequency domain.
Among the three types of transform domain, the classic and still most popular domain for image processing is that of the Discrete Cosine Transform, or DCT. The image is split into blocks of 8 x 8 pixels, and each block is used to encode one signature bit in a pseudorandom manner. The image is thus broken up into different frequency bands by the DCT, allowing watermark data to be embedded easily into the middle frequency bands of the image. The middle bands are carefully selected to avoid exposing the important parts of the image to removal through compression and noise attacks.
The DCT transformation function from the spatial to the frequency domain is given by

F(u, v) = [Λ(u)Λ(v) / 4] Σ_{i=0..7} Σ_{j=0..7} cos[(2i+1)uπ / 16] cos[(2j+1)vπ / 16] · f(i, j)

Λ(ξ) = 1/√2 for ξ = 0, and 1 otherwise,

where ξ stands for u and v, f(i, j) is the intensity of the pixel in row i and column j, and F(u, v) are the transform coefficients.
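As a check on this 8 x 8 formula, the sketch below (an illustrative assumption, not code from the paper) computes F(u, v) directly and compares it with SciPy's orthonormal 2-D DCT-II, to which it is numerically identical.

```python
# A minimal check of the 8 x 8 formula above against SciPy's orthonormal
# 2-D DCT-II; the random test block is an illustrative assumption.
import numpy as np
from scipy.fft import dctn

def dct_8x8(f):
    lam = lambda k: 1.0 / np.sqrt(2.0) if k == 0 else 1.0    # Lambda(xi)
    F = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            F[u, v] = lam(u) * lam(v) / 4.0 * sum(
                f[i, j] * np.cos((2*i + 1) * u * np.pi / 16)
                        * np.cos((2*j + 1) * v * np.pi / 16)
                for i in range(8) for j in range(8))
    return F

block = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
assert np.allclose(dct_8x8(block), dctn(block, norm='ortho'))
```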
2.1 Types of discrete cosine transform (DCT)
2.1.1 The one-dimensional DCT
The most common DCT definition for a 1-dimensional sequence of length N is the forward transform

y(k) = w(k) Σ_{i=1..N} x(i) cos[π(2i-1)(k-1) / 2N],  k = 1, ..., N
Figure 1: DCT transformation functions from Spatial to Frequency Domain
where the weight w(k) is given by

w(k) = 1/√N for k = 1, and √(2/N) for 2 ≤ k ≤ N.
The inverse discrete cosine transform reconstructs a sequence from its discrete cosine transform (DCT) coefficients:

x(i) = Σ_{k=1..N} w(k) y(k) cos[π(2i-1)(k-1) / 2N],  i = 1, ..., N

where

w(k) = 1/√N for k = 1, and √(2/N) for 2 ≤ k ≤ N.
Thus, the first transform coefficient is proportional to the average value of the sample sequence. In the literature, this value is referred to as the DC coefficient; all other transform coefficients are called the AC coefficients. The terms DC and AC are borrowed from direct current and alternating current in electricity.
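A minimal numerical check of these 1-D definitions, assuming SciPy's orthonormal DCT-II (which uses exactly the weights w(k) above): the first coefficient equals √N times the mean of the sequence, and the inverse recovers the input exactly.

```python
# A minimal check of the 1-D forward and inverse definitions above; the sample
# sequence is an illustrative assumption.
import numpy as np
from scipy.fft import dct, idct

x = np.array([52.0, 55, 61, 66, 70, 61, 64, 73])     # an arbitrary sample row
y = dct(x, norm='ortho')                              # forward transform y(k)
assert np.isclose(y[0], np.sqrt(len(x)) * x.mean())   # DC term: sqrt(N) * mean
assert np.allclose(idct(y, norm='ortho'), x)          # inverse recovers x
```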
2.1.2 The two-dimensional DCT
The 2-dimensional DCT is a direct extension of the 1-dimensional case. It is a separable, linear transformation; that is, the two-dimensional transform is equivalent to a one-dimensional DCT performed along a single dimension followed by a one-dimensional DCT in the other dimension. The definition of the two-dimensional DCT for an input image A and output image B is [20, 21]:
B_pq = α_p α_q Σ_{m=0..M-1} Σ_{n=0..N-1} A_mn cos[π(2m+1)p / 2M] cos[π(2n+1)q / 2N],  0 ≤ p ≤ M-1, 0 ≤ q ≤ N-1
α_p = 1/√M for p = 0, and √(2/M) for 1 ≤ p ≤ M-1
α_q = 1/√N for q = 0, and √(2/N) for 1 ≤ q ≤ N-1
where 𝑀 and 𝑁 are the row and column size of A, respectively. If you apply the DCT to real data, the result is also real.
The DCT tends to concentrate information, making it useful for image compression applications.
This transform can be inverted using the two-dimensional inverse DCT:

A_mn = Σ_{p=0..M-1} Σ_{q=0..N-1} α_p α_q B_pq cos[π(2m+1)p / 2M] cos[π(2n+1)q / 2N],  0 ≤ m ≤ M-1, 0 ≤ n ≤ N-1

with α_p and α_q as defined above.
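The separability noted above can be verified directly; the following sketch (an illustrative assumption using SciPy) applies a 1-D DCT along each axis in turn and compares the result with the 2-D transform.

```python
# A minimal check of separability: a 1-D DCT along the columns followed by a
# 1-D DCT along the rows equals the 2-D DCT. The test array is an assumption.
import numpy as np
from scipy.fft import dct, dctn

A = np.random.default_rng(2).random((4, 6))           # M = 4 rows, N = 6 columns
two_pass = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')
assert np.allclose(two_pass, dctn(A, norm='ortho'))
```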
3 Effect of block sizes on attributes of watermarked image
3.1 Watermark attributes
The technique used for watermarking in this paper is the Discrete Cosine Transform (DCT). The DCT function transforms spatial-domain data to the frequency domain. The image is split into various blocks and the attributes of the watermarked image are compared based on the block sizes used.
The host image and the watermark are combined by the equation

C'_ij(n) = α_n C_ij(n) + β_n W_ij(n),  n = 1, 2, ...

where α_n and β_n are the coefficients for block size index n, C_ij(n) are the DCT coefficients of the host image block and W_ij(n) the DCT coefficients of the watermark image block. α_n is the scaling factor and β_n the embedding factor; here α_n = 1 and β_n = 1/2^n, where n indexes the block size and ranges from 1 to 6. The effect of block sizes is investigated through the various attributes of the watermarked image.
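Read literally, this rule amounts to the block-wise sketch below, assuming α_n = 1, β_n = 1/2^n, and a block size of 2^n x 2^n; the name embed_blockwise and the use of SciPy's orthonormal DCT are illustrative assumptions, and the host and watermark are taken to be equal-sized grayscale arrays.

```python
# A minimal sketch of the embedding rule above, assuming alpha_n = 1 and
# beta_n = 1/2**n with block size 2**n x 2**n (n = 1..6); host and mark are
# assumed equal-sized, with sides divisible by the block size.
import numpy as np
from scipy.fft import dctn, idctn

def embed_blockwise(host, mark, n):
    bs, beta = 2**n, 1.0 / 2**n                        # block size and beta_n
    out = np.empty(host.shape, dtype=float)
    for r in range(0, host.shape[0], bs):
        for c in range(0, host.shape[1], bs):
            Ch = dctn(host[r:r+bs, c:c+bs].astype(float), norm='ortho')
            Cw = dctn(mark[r:r+bs, c:c+bs].astype(float), norm='ortho')
            out[r:r+bs, c:c+bs] = idctn(Ch + beta * Cw, norm='ortho')  # alpha_n = 1
    return out
```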
3.1.1 Channel capacity.
The channel capacity, which is an upper bound on the rate at which communication reliably occurs, is given by

C = W log2(1 + S/N)

or, for W = 1/2,

C = (1/2) log2(1 + S/N)

(Jim Chou et al, 2001). Here W is the bandwidth, given in units of pixel^-1, and the channel capacity has units of bits per pixel.
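For illustration (an assumption, not a computation from the paper), the per-pixel form can be evaluated directly; a signal-to-noise power ratio of 2^70 - 1, for instance, yields 35 bit/pixel, the order of magnitude reported in Table 1.

```python
# A minimal evaluation of the per-pixel capacity formula above; the example
# S/N value is an illustrative assumption, not data from the paper.
import numpy as np

def capacity(snr):                        # snr = S/N as a plain power ratio
    return 0.5 * np.log2(1.0 + snr)       # bits per pixel

print(capacity(2**70 - 1))                # -> 35.0
```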
3.1.2 Imperceptibility
The signal-to-noise ratio (SNR) is a rough estimate of perceptibility: the higher the SNR, the more visible the modulated carriers. Mathematical metrics such as the mean square error (MSE), as well as visual analysis of the watermarked image, can also be used.
3.1.3 Robustness
The peak signal-to-noise ratio (PSNR) is used to quantify distortion due to watermarking:

PSNR = 10 log10(256^2 / MSE)
3.1.4 Distortion
The method used for measuring the distortion caused in the watermarked image is the mean square error (MSE). The signal-to-noise ratio can also be used for measuring distortion in digital images. The MSE is given by the equation

MSE = (1/MN) Σ_{i=1..M} Σ_{j=1..N} (P_ij - P'_ij)^2

where M is the number of component colours, N the number of pixels, P_ij the original image and P'_ij the watermarked image. The SNR can be used as a measure of image distortion: the larger the SNR, the better the quality of the image.
The SNR is related to the MSE by the equation

SNR = 10 log10 [ Σ_{i=1..M} Σ_{j=1..N} (I'_ij)^2 / (MN · E_ms) ]

where

e_ij = I_ij - I'_ij

MSE = E_ms = (1/MN) Σ_{i=1..M} Σ_{j=1..N} e_ij^2
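Under the assumption that the MSE, SNR and PSNR are computed exactly as defined above (with the paper's peak value of 256), a minimal sketch of these distortion measures is:

```python
# A minimal sketch of the distortion measures above, using the paper's peak
# value of 256 in the PSNR; the function names are illustrative assumptions.
import numpy as np

def mse(orig, marked):
    return np.mean((orig.astype(float) - marked.astype(float)) ** 2)

def psnr(orig, marked):
    return 10.0 * np.log10(256.0**2 / mse(orig, marked))

def snr(orig, marked):
    err = orig.astype(float) - marked.astype(float)
    return 10.0 * np.log10(np.sum(marked.astype(float)**2) / np.sum(err**2))
```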
3.2 Implementation
To achieve the objectives of this work, the following steps are carried out (a sketch tying them together is given after the list):
∙ Transform the original image and the watermark image into blocks.
∙ Compute the DCT coefficients for each block of the original image.
∙ Insert α_n and β_n for combining the images.
∙ Compute the DCT coefficients for each block of the watermark image.
∙ Form the final watermarked image, according to the equation above, by adding the watermark image to the original image in the DCT domain.
∙ Perform a distortion test on the watermarked image.
∙ Perform a robustness test on the watermarked image.
∙ Perform a capacity test on the watermarked image.
∙ Compare the results obtained.
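The sketch below ties these steps together, reusing the embed_blockwise, mse and psnr sketches given earlier; the stand-in random images are an illustrative assumption in place of the UNAAB logo and Lena images used in the paper.

```python
# A minimal driver for the steps above, reusing embed_blockwise, mse and psnr
# from the earlier sketches; the stand-in images are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
host = rng.integers(0, 256, (256, 256)).astype(float)   # stand-in host image
mark = rng.integers(0, 256, (256, 256)).astype(float)   # stand-in watermark

for n in range(1, 7):                                   # blocks 2 x 2 ... 64 x 64
    marked = embed_blockwise(host, mark, n)
    print(f"{2**n:>2} x {2**n}: MSE = {mse(host, marked):9.4f}, "
          f"PSNR = {psnr(host, marked):7.3f} dB")
```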
4 Experimental results
4.1 Images used
The images used for the watermarking are the logo of the Federal University of Agriculture, Abeokuta, Nigeria, shown in Figure 2, and the image "LENA", popularly used in image processing and downloaded from the Internet, shown in Figure 3.
Figure 2: UNAAB logo. Figure 3: Lena Image.
A visible watermarked image is shown in Figure 4 for key = 2, while Figure 5 shows invisible watermarking at key = 16.
Figure 4: Visible watermarking. Figure 5: Invisible watermarking.
4.2 Results and Discussion
The functions of robustness, channel capacity and distortion were coded in MATLAB 7.1. The results from the program were based on the various effects of block size on time, capacity, robustness, distortion and imperceptibility of the watermarked image, after subjecting the watermarked image to the various tests according to block size. Table 1 summarizes the results for these attributes. Looking at Table 1, the DCT time is very high at block size 2 x 2 with 7.318 s, and gradually reduces to 5.160 s at block size 4 x 4 and 0.616 s at block size 64 x 64.
For the attribute of robustness, the peak signal-to-noise ratio is lowest at block size 2 x 2 with 62.063 dB and gradually increases to 65.490 (block size 4 x 4), 68.731 (block size 8 x 8), 69.996 (block size 16 x 16), 70.904 (block size 32 x 32) and 70.989 dB (block size 64 x 64). The attribute of capacity gradually increases as the block size increases, as depicted in Table 1.
Every communication channel suffers from distortion; therefore, hiding a watermark in an image will introduce some measure of distortion into the original image. Distortion in this study varied significantly with block size together with the step size, as illustrated in Table 2. For example, at block size 2 x 2 with steps 2, 4, 8, 16, 32 and 64, the distortion values are 0.3629, 4.0646, 1.2178, 13.2961, 39.9999 and 132.5639. Figure 6 shows the distortion values of the block sizes (2 x 2, 4 x 4, 8 x 8, 16 x 16, 32 x 32 and 64 x 64) at step 2. The same fluctuations are noticed for the other distortion values of block sizes against their step sizes. Moreover, the values of distortion for the various step sizes at a particular block size are also noted, as illustrated in Table 2, with their fluctuation shown in Figure 6.
The block size used for watermarking is independent of the imperceptibility of the watermark. The embedded image introduces distortions into the original image but does not control the imperceptibility of the watermark; the imperceptibility depends on the embedding factor β of the watermark. The perceptibility of the watermark reduces as the key used (2, 4, 8, 16, 32, 64, etc.), which corresponds to the block sizes (2 x 2, 4 x 4, 8 x 8, 16 x 16, 32 x 32, 64 x 64, etc.), increases. For example, the watermark is visible with keys 2, 4 and 8 but not visible with keys 16, 32 and 64, as shown in Table 1. At key 16, which corresponds to block size 16 x 16, the watermarked image still looks like the original image despite the hidden watermark.
Table 1: The summary of the various attribute values.
Key  Block size  Robustness (dB)  Time (s)  Visibility   Capacity (bit/pixel)
2    2 x 2       62.063           7.318     Visible      31.532
4    4 x 4       65.490           5.160     Visible      33.245
8    8 x 8       68.731           1.760     Visible      34.865
16   16 x 16     69.996           0.880     Not visible  35.455
32   32 x 32     70.904           0.716     Not visible  35.952
64   64 x 64     70.989           0.616     Not visible  35.966
Table 2: Summary of block size with distortion values for different step sizes.
Block size  Step=2   Step=4   Step=8   Step=16   Step=32   Step=64
2 x 2       0.3629   4.0646   1.2178   13.2961   39.9999   132.5639
4 x 4       0.3051   3.3734   1.0026   10.5401   30.5552   79.7364
8 x 8       0.2721   2.1697   0.6867   7.4734    27.6999   72.1493
16 x 16     0.3007   3.1966   0.9945   10.4414   29.1143   72.9128
32 x 32     0.3177   3.8375   1.1307   12.0094   33.2575   81.4464
64 x 64     0.3302   4.4939   1.2562   14.3236   39.4926   93.7040

At step = 2, the distortion (DT) values across the block sizes are DT = 0.3629, 0.3051, 0.2721, 0.3007, 0.3177, 0.3302.
Figure 6: Distortion values of various block sizes with step size 2.
5 Conclusion
This work has applied the discrete cosine transform (DCT) to achieve visible and invisible watermarking of a digital image against piracy. The work has also investigated the effect of block size and step size on the attributes of the watermarked image. It was found that the time for watermarking reduces as the block size increases, while capacity and robustness increase with block size. Distortion fluctuates below block size 8 x 8 but gradually increases above it. Above 8 x 8 the watermark is invisible, and block size 16 x 16 can serve as a means of checking piracy of digital images because the watermarked image produced resembles the original image despite the fact that it contains a hidden watermark.
References
[1] Allan MB: A review of digital watermarking. Department of Engineering, University of Aberdeen. (2001).
[2] Embry A: Influence of DCT on coding efficiency. University of Stanford, USA, EE368B. (2000).
[3] Janahan C: Digital image watermarking - An overview. Indian Institute of Information Technology, India. (2000).
[4] Chou J: New generation technique for robust and imperceptible audio data hiding.
[5] Zhang Z, Zhu Z, Xi L: Novel scheme for watermarking stereo video. International Journal of Nonlinear Science. 3:(2007),74-80.
[6] Kan H: Adaptive visible watermarking for images. In Proc. of ICMCS 99, Florence, Italy. (1999).
[7] Wong P: Protecting digital media content. ACM. 41:(1999),35-43.
[8] Barni M, Bartolini F, Checcacci: Watermarking of MPEG-4 video objects. IEEE Transactions on Multimedia. 7:(2005),23-32.
[9] Cayre F, Fontaine C, Furon T: Watermarking security: Theory and practice. IEEE Transactions on Signal Processing. 53:(2005),3976-3987.
[10] Mobasseri BG, Berger RJ: A foundation for watermarking in compressed domain. IEEE Signal Processing Letters. 12:(2005),399-402.
[11] Lei CL, Yu PL, Tsa PL, Chan MH: An efficient and anonymous buyer-seller watermarking protocol. IEEE Transactions on Signal Processing. 13:(2004),1618-1626.
[12] Joshua RS: Modulation and information hiding in images. In Proceedings of the First Information Hiding Workshop, Isaac Newton Institute, Cambridge. (1996).
[13] Marnel LM, Boncelet CG, Retter CT: Spread spectrum image steganography. IEEE Transactions on Signal Processing. 6:(1999),1075-1083.
[14] Sun M, Tian L, Yin J: Hopf bifurcation analysis of the energy resource chaotic system. International Journal of Nonlinear Science. 1:(2006),49-53.
[15] Cai G, Huang J, Tian L, Wang Q: Adaptive control and slow manifold analysis of a new chaotic system. International Journal of Nonlinear Science. 2:(2006),50-60.
[16] Wang X, Tian L, Yu L: Linear feedback controlling and synchronization of the Chen's chaotic system. International Journal of Nonlinear Science. 2:(2006),43-49.
[17] Idowu BA, Vincent UE, Njah AN: Control and synchronization of chaos in nonlinear gyros via backstepping design. International Journal of Nonlinear Science. 5:(2008),11-19.
[18] Njah AN, Vincent UE: Synchronization and anti-synchronization of chaos in an extended Bonhöffer-van der Pol oscillator using active control. Journal of Sound and Vibration. 319:(2009),41-49.
[19] Njah AN, Ojo KS: Synchronization via backstepping nonlinear control of externally excited Φ⁶ van der Pol and Φ⁶ Duffing oscillators. Far East Journal of Dynamical Systems. 11:(2009),41-49.
[20] Koch E, Zhao J: Towards robust and hidden image copyright labeling. In Proceedings of the IEEE Nonlinear Signal Processing Workshop. (1995),452-455.
[21] Bors A, Pitas I: Image watermarking using DCT domain constraints. In IEEE Conference on Image Processing, Lausanne, Switzerland. (1996),231-234.
[22] Cox I, Kilian J, Leighton FT, Shamoon T: Secure spread spectrum watermarking for multimedia. IEEE Transactions on Image Processing. 6:(1997),1673-1687.
[23] Suhail MA: Simulation study of digital watermarking in JPEG 2000 schemes. IEEE Transaction on Watermarking. (2001).
[24] Kingsbury N: 2-dimensional discrete cosine transform. Connexions. (2005).
[25] Juan RH: Statistical analysis of watermarking schemes for copyright protection of images. IEEE. (1999).
[26] Seng C: Performance factor analysis of wavelet based watermarking methods. IEEE. (2005).
[27] Natarajan NT, Rao KR: On image processing and a discrete cosine transform. IEEE Transactions on Computers C-23. (1974).
[28] Tao B, Dikson B: Adaptive watermarking in the DCT domain. International Conference on Acoustics, Speech and Signal Processing, ICASSP. (1997).
[29] Saraju PM: A discrete cosine transform domain visible watermarking for images. Department of Computer Science and Engineering, University of South Florida, FL 33620, USA. (2000).
[30] Seyd A: The discrete cosine transform: theory and application. Department of Electrical and Computer Engineering, Michigan State University. (2003).