Digital media applications, copyright protection, and multimedia security have become a vital aspect of our daily life. Digital watermarking is a technology used for copyright protection of digital content. In this work we present a process for marking digital pictures with invisible or semi-visible hidden information, called a watermark; this process may serve as the basis of a complete copyright protection system. Watermarking is implemented here using Haar wavelet coefficients and Principal Component Analysis (PCA). Experimental results show high imperceptibility: there is no noticeable difference between the watermarked video frames and the original frames in the invisible case, while the semi-visible implementation deliberately leaves a perceptible mark. The watermark is embedded in the low-frequency band of the wavelet-transformed cover image, and the combination of the two transforms is found to improve the performance of the watermarking algorithm. The robustness of the scheme is analyzed by means of two distinct performance measures, Peak Signal-to-Noise Ratio (PSNR) and Normalized Correlation coefficient (NC).
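As an illustration of the embedding step described above, the following minimal Python sketch (assuming NumPy and the PyWavelets package; the names cover, mark and alpha are hypothetical) embeds a small watermark additively into the low-frequency (LL) Haar subband and computes PSNR and NC. It is a simplified stand-in for the full Haar-plus-PCA scheme; the PCA step is omitted.

import numpy as np
import pywt

def embed_ll(cover, mark, alpha=0.05):
    # One-level Haar decomposition of the cover image (grayscale float array)
    LL, (LH, HL, HH) = pywt.dwt2(cover, 'haar')
    # Additive embedding of the (resized) watermark into the LL band
    wm = np.resize(mark, LL.shape)
    LL_marked = LL + alpha * wm
    # Reconstruct the watermarked image
    return pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

def psnr(original, watermarked, peak=255.0):
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def nc(w_original, w_extracted):
    # Normalized correlation between original and extracted watermarks
    a = w_original.ravel().astype(float)
    b = w_extracted.ravel().astype(float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))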
IRJET - Finding Dominant Color in the Artistic Painting using Data Mining ... (IRJET Journal)
This document discusses finding the dominant color in an artistic painting using data mining techniques. It proposes using k-means clustering via the OpenCV library in Python to cluster pixels in the image by color and determine the dominant color cluster. The document provides background on k-means clustering and other clustering algorithms. It then describes applying a faster k-means algorithm to the image pixels to efficiently identify the dominant color in 2-3 times fewer iterations than standard k-means. The proposed system architecture involves preprocessing the image, extracting pixel vectors, clustering the pixels into color groups using fast k-means, and identifying the dominant color cluster.
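A minimal sketch of the clustering step, assuming OpenCV (cv2) and NumPy and a hypothetical input file image.jpg; it uses OpenCV's standard k-means rather than the faster variant described in the paper.

import cv2
import numpy as np

img = cv2.imread('image.jpg')                     # BGR image
pixels = img.reshape(-1, 3).astype(np.float32)    # one row per pixel

# Standard OpenCV k-means over pixel colors
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
k = 5
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# The dominant color is the centroid of the largest cluster
counts = np.bincount(labels.ravel(), minlength=k)
dominant_bgr = centers[np.argmax(counts)]
print('Dominant color (BGR):', dominant_bgr)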
This document proposes an adaptive steganography technique for hiding secret data in digital images. The technique uses variable bit length embedding based on wavelet coefficients of the cover image. A logistic map is used to generate a secret key, which determines the RGB color planes and blocks used for data hiding. Wavelet coefficients are classified into ranges, and the number of bits hidden corresponds to the coefficient value range. Extraction involves selecting the same coefficients based on the key and calculating the hidden bits. The technique aims to improve security, capacity and imperceptibility compared to existing constant bit length methods. Evaluation shows the proposed method provides good security since variable bits are hidden randomly using the secret key.
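The key-generation idea can be illustrated with a short sketch, assuming NumPy; the parameter values r = 3.99 and x0, and the way the chaotic sequence is mapped to color-plane and block indices, are illustrative assumptions rather than the paper's exact construction.

import numpy as np

def logistic_key(x0, r, n):
    # Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

# Derive hypothetical embedding positions from the chaotic sequence
seq = logistic_key(x0=0.3141, r=3.99, n=64)
planes = (seq * 3).astype(int)           # 0 = R, 1 = G, 2 = B color plane
blocks = (seq * 1e4).astype(int) % 256   # block index within the chosen plane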
Experimental study of Data clustering using k-Means and modified algorithms (IJDKP)
The k-means clustering algorithm is an old algorithm that has been intensely researched owing to its ease and simplicity of implementation. Clustering algorithms have broad appeal and usefulness in exploratory data analysis. This paper presents the results of an experimental study of different approaches to k-means clustering, comparing results on different datasets obtained with the original k-means and other modified algorithms implemented in MATLAB R2009b. The results are evaluated on performance measures such as the number of iterations, the number of misclassified points, accuracy, the Silhouette validity index and execution time.
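The measures listed above can be reproduced in a few lines; the sketch below uses scikit-learn on a synthetic dataset as a stand-in for the MATLAB experiments, so the dataset and parameters are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, y_true = make_blobs(n_samples=500, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Map each cluster to its majority true label to count misclassified points
labels = km.labels_
mapped = np.empty_like(labels)
for c in np.unique(labels):
    mapped[labels == c] = np.bincount(y_true[labels == c]).argmax()

misclassified = int(np.sum(mapped != y_true))
accuracy = 1.0 - misclassified / len(y_true)
print('iterations:', km.n_iter_)
print('misclassified:', misclassified, 'accuracy:', accuracy)
print('silhouette:', silhouette_score(X, labels))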
Density Based Clustering Approach for Solving the Software Component Restruct... (IRJET Journal)
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
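A minimal sketch of the clustering phase, assuming scikit-learn; the component feature matrix and the eps/min_samples values are hypothetical placeholders for whatever dependency or similarity features the restructuring approach actually extracts.

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical feature vectors, one row per software component
# (e.g. fan-in, fan-out, shared-call counts gathered in the data phase)
features = np.random.rand(40, 5)

X = StandardScaler().fit_transform(features)
db = DBSCAN(eps=0.8, min_samples=3).fit(X)

# Label -1 marks noise; every other label is a restructuring candidate group
for label in sorted(set(db.labels_)):
    members = np.where(db.labels_ == label)[0]
    print('cluster', label, '->', members.tolist())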
This document discusses using Fourier transforms to measure image similarity. It proposes a metric that uses both the real and imaginary components of the Fourier transform to compute a similarity ranking between two images. The metric calculates the intersection of the covariance matrices of the Fourier transform magnitude and phase spectra of the two images. The approach is shown to be advantageous for images with varying degrees of lighting. It summarizes existing methods for image comparison and feature extraction, and discusses implementing the proposed similarity metric using OpenCV. Sample results are given comparing the new Fourier-based method to existing histogram comparison techniques.
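As a rough illustration (not the paper's exact covariance-intersection metric), the sketch below, assuming NumPy and two equally sized grayscale arrays img1 and img2, compares the magnitude and phase spectra of their Fourier transforms with a simple correlation score.

import numpy as np

def spectra(img):
    f = np.fft.fft2(img.astype(float))
    return np.abs(f), np.angle(f)

def corr(a, b):
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fourier_similarity(img1, img2, w_mag=0.5, w_phase=0.5):
    # Weighted combination of magnitude and phase similarity (illustrative weights)
    m1, p1 = spectra(img1)
    m2, p2 = spectra(img2)
    return w_mag * corr(np.log1p(m1), np.log1p(m2)) + w_phase * corr(p1, p2)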
IRJET - Wavelet Transform based Steganography (IRJET Journal)
This document proposes an image steganography technique to hide audio signals in images using wavelet transforms. It discusses how discrete wavelet transform (DWT) can be used to decompose images and audio into different frequency subbands. The technique encrypts an audio file (MP3 or WAV) and hides it in the wavelet coefficients of an image. When extracted, the secret audio signal is decrypted. The quality of the stego image and extracted audio is measured using metrics like PSNR, SSIM, SNR, and SPCC. The results show good quality for the steganography technique and that it can withstand various attacks.
In the recent machine learning community there is a trend of constructing nonlinear versions of linear algorithms through the 'kernel method', for example kernel principal component analysis, kernel Fisher discriminant analysis, Support Vector Machines (SVMs), and the recent kernel clustering algorithms. Typically, in unsupervised kernel clustering algorithms, a nonlinear mapping is first applied to map the data into a much higher-dimensional feature space, and clustering is then performed there. A drawback of these kernel clustering algorithms is that the cluster prototypes reside in the high-dimensional feature space and therefore lack intuitive, clear descriptions unless an additional approximate projection from the feature space back to the data space is used, as done in the existing literature. This paper uses the 'kernel method' to propose a novel clustering algorithm, based on the conventional fuzzy c-means algorithm (FCM), called the kernel fuzzy c-means algorithm (KFCM). The method adopts a kernel-induced metric in the data space to replace the original Euclidean norm, so the cluster prototypes still reside in the data space and the clustering results can be interpreted and reformulated in the original space. This property is exploited for clustering incomplete data. Experiments on synthetic data illustrate that KFCM gives better clustering performance and is more robust than other variants of FCM for clustering incomplete data.
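A compact sketch of one common KFCM iteration with a Gaussian kernel, assuming NumPy; the updates follow the usual kernel fuzzy c-means formulation with K(x, v) = exp(-||x - v||^2 / sigma^2), and sigma, m and the iteration count are illustrative choices.

import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, c, replace=False)]          # prototypes stay in data space
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma ** 2)                # Gaussian kernel K(x_k, v_i)
        # Membership update: u_ik proportional to (1 - K)^(-1/(m-1))
        dist = np.clip(1.0 - K, 1e-12, None)
        inv = dist ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Prototype update: kernel-weighted mean of the data points
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V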
Parallel KNN for Big Data using Adaptive Indexing (IRJET Journal)
This document presents an evaluation of different algorithms for performing parallel k-nearest neighbor (kNN) queries on big data using the MapReduce framework. It first discusses how kNN algorithms do not scale well for large datasets. It then reviews existing MapReduce-based kNN algorithms like H-BNLJ, H-zkNNJ, and RankReduce that improve performance by partitioning data and distributing computation. The document also proposes using an adaptive indexing technique with the RankReduce algorithm. An implementation of this approach on an airline on-time statistics dataset shows it achieves better precision and speed than the other algorithms.
A COMPARATIVE STUDY ON DISTANCE MEASURING APPROACHES FOR CLUSTERING (IJORCS)
Clustering plays a vital role in various areas of research such as Data Mining, Image Retrieval, Bio-computing and many more. The distance measure plays an important role in clustering data points, and choosing the right distance measure for a given dataset is a big challenge. In this paper, we study various distance measures and their effect on different clustering algorithms. The paper surveys existing distance measures for clustering and presents a comparison between them based on application domain, efficiency, benefits and drawbacks. This comparison helps researchers decide quickly which distance measure to use for clustering. We conclude this work by identifying trends and challenges of research and development in clustering.
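To make the comparison concrete, the short sketch below (assuming NumPy and SciPy) computes several of the commonly surveyed distance measures between two feature vectors; the vectors themselves are illustrative.

import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 2.5, 1.0, 4.5])

measures = {
    'euclidean': distance.euclidean(a, b),
    'manhattan': distance.cityblock(a, b),
    'chebyshev': distance.chebyshev(a, b),
    'cosine': distance.cosine(a, b),          # 1 - cosine similarity
    'minkowski p=3': distance.minkowski(a, b, 3),
}
for name, value in measures.items():
    print(f'{name:15s} {value:.4f}')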
Analysis of mass based and density based clustering techniques on numerical d... (Alexander Decker)
This document compares and analyzes mass-based and density-based clustering techniques. It summarizes DBSCAN and OPTICS, two popular density-based clustering algorithms, and introduces DEMassDBSCAN, a mass-based clustering algorithm. The document tests the algorithms on several datasets and finds that DEMassDBSCAN has better runtime than DBSCAN, especially on larger datasets, while producing fewer unassigned clusters.
Kernel based similarity estimation and real time tracking of moving (IAEME Publication)
This document discusses a kernel-based mean shift algorithm for real-time object tracking. It presents the following:
1) The algorithm uses kernel density estimation to calculate the similarity between a target model and candidate windows, using the Bhattacharyya coefficient (see the sketch after this list).
2) It can successfully track objects moving uniformly at slow speeds but struggles with fast or non-uniform motion, or changes in scale.
3) The algorithm was tested on video streams and could track slowly moving objects but failed for fast or irregular motion; adaptive target windows are needed to handle changes in scale.
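The similarity measure in point 1 can be sketched as follows, assuming NumPy and that target and candidate are color histograms of the two windows (hypothetical inputs); the Bhattacharyya coefficient is the sum of the element-wise square roots of the two normalized histograms.

import numpy as np

def bhattacharyya(p, q):
    # Normalize histograms to probability distributions, then compare
    p = p / (p.sum() + 1e-12)
    q = q / (q.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))   # 1.0 means identical distributions

target = np.random.rand(16)      # hypothetical target-model histogram
candidate = np.random.rand(16)   # hypothetical candidate-window histogram
print('Bhattacharyya coefficient:', bhattacharyya(target, candidate))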
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ... (ijcseit)
Many studies have been done in the area of Wireless Sensor Networks (WSNs) in recent years. In these networks, some of the key objectives that need to be satisfied are area coverage, the number of active sensors and the energy consumed by nodes. In this paper, we propose an NSGA-II based multi-objective algorithm for optimizing all of these objectives simultaneously. The efficiency of our algorithm is demonstrated in the simulation results: it finds the optimal balance among the maximum coverage rate, the least energy consumption, and the minimum number of active nodes while maintaining the connectivity of the network.
A NEW ALGORITHM FOR DATA HIDING USING OPAP AND MULTIPLE KEYS (Editor IJMTER)
Steganography has gained significance in the past few years due to the increasing need for secrecy in an open environment like the Internet. With almost anyone able to observe the communicated data, steganography attempts to hide the very existence of the message and make communication undetectable. In this paper we propose a technique based on the Integer Wavelet Transform (IWT) and a double key to accomplish high hiding capacity, high security and good visual quality. The cover image is transformed into wavelet coefficients, and coefficients are selected randomly using Key-2 for embedding the data. Key-1 is used to calculate the number of bits to be embedded in the randomly selected coefficients. Finally, the Optimum Pixel Adjustment Process (OPAP) is applied to the stego image to reduce the data embedding error.
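The OPAP step mentioned at the end can be illustrated with a small sketch (assuming NumPy): it shows the usual optimum pixel adjustment idea of nudging a stego value by plus or minus 2^k so that it keeps the same k embedded LSBs while moving closer to the original value. The wavelet-domain and key-selection parts of the paper are not reproduced here.

import numpy as np

def opap(original, stego, k):
    # original, stego: integer arrays in [0, 255]; k: number of embedded LSBs
    adjusted = stego.astype(int)
    delta = original.astype(int) - adjusted
    step = 1 << k
    # Moving by +/- 2^k leaves the k embedded bits unchanged
    up = (delta > step // 2) & (adjusted + step <= 255)
    down = (delta < -step // 2) & (adjusted - step >= 0)
    adjusted[up] += step
    adjusted[down] -= step
    return adjusted.astype(np.uint8)

orig = np.array([120, 37, 200], dtype=np.uint8)
steg = np.array([127, 32, 199], dtype=np.uint8)   # after 3-bit LSB embedding
print(opap(orig, steg, k=3))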
This document discusses parallelizing graph algorithms on GPUs for optimization. It summarizes previous work on parallel Breadth-First Search (BFS), All Pair Shortest Path (APSP), and Traveling Salesman Problem (TSP) algorithms. It then proposes implementing BFS, APSP, and TSP on GPUs using optimization techniques like reducing data transfers between CPU and GPU and modifying the algorithms to maximize GPU computing power and memory usage. The paper claims this will improve performance and speedup over CPU implementations. It focuses on optimizing graph algorithms for parallel GPU processing to accelerate applications involving large graph analysis and optimization problems.
Image Fusion using PCA Based Fusion Rule in Wavelet Domain (ijtsrd)
Image fusion deals with combining two or more input images to produce a new fused output image, and is a rapidly developing branch of image processing. Its main aim is to provide maximum information in the resulting image produced from the fusion of two or more images of the same scene, or of different scenes, taken at different instants of time. The result of image fusion is an image with more information and better quality. PCA provides dimensionality reduction and feature extraction, DWT decomposes the image by a factor of two, and LWT is a second-generation wavelet. Deepak Gambhir "Image Fusion using PCA Based Fusion Rule in Wavelet Domain" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-6, October 2020, URL: https://www.ijtsrd.com/papers/ijtsrd33367.pdf Paper Url: https://www.ijtsrd.com/computer-science/computer-graphics/33367/image-fusion-using-pca-based-fusion-rule-in-wavelet-domain/deepak-gambhir
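A minimal sketch of the commonly used PCA fusion rule, assuming NumPy and two registered source images img1 and img2 of the same size (hypothetical inputs): the weights come from the principal eigenvector of the 2x2 covariance of the two images' pixel values, and the same rule can be applied to wavelet subbands instead of raw pixels.

import numpy as np

def pca_fusion_weights(img1, img2):
    # Stack the two images as two observation rows and take the eigenvector
    # of their 2x2 covariance matrix with the largest eigenvalue
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()                     # weights sum to 1

def pca_fuse(img1, img2):
    w1, w2 = pca_fusion_weights(img1, img2)
    return w1 * img1.astype(float) + w2 * img2.astype(float)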
Segmentation - based Historical Handwritten Word Spotting using document-spec... (Konstantinos Zagoris)
Many word spotting strategies for modern documents are not directly applicable to historical handwritten documents due to the variety of writing styles and intense degradation. In this paper, a new method that permits effective word spotting in handwritten documents is presented; it relies upon document-specific local features which take into account texture information around representative keypoints. Experimental work on two historical handwritten datasets using standard evaluation measures shows the improved performance achieved by the proposed methodology.
This document summarizes a research paper that proposes a new density-based clustering technique called Triangle-Density Based Clustering Technique (TDCT) to efficiently cluster large spatial datasets. TDCT uses a polygon approach where the number of data points inside each triangle of a polygon is calculated to determine triangle densities. Triangle densities are used to identify clusters based on a density confidence threshold. The technique aims to identify clusters of arbitrary shapes and densities while minimizing computational costs. Experimental results demonstrate the technique's superiority in terms of cluster quality and complexity compared to other density-based clustering algorithms.
Feature Subset Selection for High Dimensional Data using Clustering Techniques (IRJET Journal)
The document discusses feature subset selection for high dimensional data using clustering techniques. It proposes a FAST algorithm that has three steps: (1) removing irrelevant features, (2) dividing features into clusters, (3) selecting the most representative feature from each cluster. The FAST algorithm uses DBSCAN, a density-based clustering algorithm, to cluster the features. DBSCAN can identify clusters of arbitrary shape and detect noise, making it suitable for high dimensional data. The goal of feature subset selection is to find a small number of discriminative features that best represent the data.
The objective of this paper is to present a hybrid approach for edge detection. Under this technique, edge detection is performed in two phases. In the first phase, the Canny algorithm is applied for image smoothing, and in the second phase a neural network detects the actual edges. A neural network is a useful tool for edge detection, as it is a non-linear network with built-in thresholding capability. The network can be trained with the back-propagation technique using a few training patterns, but the most important and difficult part is to identify a correct and proper training set.
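The first (smoothing and Canny) phase can be sketched in a few lines with OpenCV; the thresholds and the input file name below are illustrative, and the neural-network phase that classifies the remaining edge candidates is left out.

import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

# Phase 1: Gaussian smoothing followed by Canny edge detection
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)
edges = cv2.Canny(smoothed, 50, 150)

# Phase 2 (not shown): a trained back-propagation network would decide
# which of these candidate pixels are actual edges.
cv2.imwrite('edges.png', edges)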
This document describes MMRetrieval.net, a multimodal search engine that allows retrieval of information from multiple modalities including text and images. It discusses challenges in multimodal retrieval like defining modality, fusing scores across modalities, and improving efficiency through two-stage retrieval. MMRetrieval.net was developed by researchers at DUTH to participate in the ImageCLEF 2010 Wikipedia retrieval task, where it achieved the second best performance overall by combining text and visual features through various fusion techniques. The system demonstrates the promise of multimodal search but also identifies open challenges around modality definitions, fusion methods, and scaling to large databases.
Scalable Rough C-Means clustering using Firefly algorithm (Abhilash Namdev and B.K. Tripathy)
Significance of Embedded Systems to IoT (P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri)
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria (Janet O. Adekannbi and Testimony Morenike Oluwayinka)
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique (Erfan Shafaghat and Mostafa Yousefi Rad)
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police (Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty)
Analysis of Petrol Pumps Reachability in Anand District of Gujarat (Nidhi Arora)
The document proposes a Modified Pure Radix Sort algorithm for large heterogeneous datasets. The algorithm divides the data into numeric and string processes that work simultaneously. The numeric process further divides data into sublists by element length and sorts them simultaneously using an even/odd logic across digits. The string process identifies common patterns to convert strings to numbers that are then sorted. This optimizes problems with traditional radix sort through a distributed computing approach.
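A minimal least-significant-digit radix sort over the numeric part, as a Python sketch; the grouping of values by digit length and the parallel string process described in the document are only hinted at in the comments.

from collections import defaultdict

def radix_sort(nums, base=10):
    # Classic LSD radix sort for non-negative integers
    if not nums:
        return nums
    digits = len(str(max(nums)))
    for d in range(digits):
        buckets = defaultdict(list)
        for n in nums:
            buckets[(n // base ** d) % base].append(n)
        nums = [n for key in range(base) for n in buckets[key]]
    return nums

# The modified algorithm would first split the input into numeric and string
# sublists (and the numeric sublists by element length) and sort them concurrently.
print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))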
This document summarizes a research paper on developing an improved LEACH (Low-Energy Adaptive Clustering Hierarchy) communication protocol for energy efficient data mining in multi-feature sensor networks. It begins with background on wireless sensor networks and issues like energy efficiency. It then discusses the existing LEACH protocol and its drawbacks. The proposed improved LEACH protocol includes cluster heads, sub-cluster heads, and cluster nodes to address LEACH's limitations. This new version aims to minimize energy consumption during cluster formation and data aggregation in multi-feature sensor networks.
This document compares the k-means and grid density clustering algorithms. K-means partitions data into k clusters based on minimizing distances between points and cluster centroids. It works well with numerical data but can be affected by outliers. Grid density determines dense grids based on neighbor densities and can handle different shaped and multi-density clusters without knowing the number of clusters beforehand. It has advantages over k-means in that it can handle categorical data, noise and arbitrary shaped clusters.
Stochastic Computing Correlation Utilization in Convolutional Neural Network ... (TELKOMNIKA JOURNAL)
In recent years, many applications have been implemented in embedded systems and mobile Internet of Things (IoT) devices that typically have constrained resources, smaller power budget, and exhibit "smartness" or intelligence. To implement computation-intensive and resource-hungry Convolutional Neural Network (CNN) in this class of devices, many research groups have developed specialized parallel accelerators using Graphical Processing Units (GPU), Field-Programmable Gate Arrays (FPGA), or Application-Specific Integrated Circuits (ASIC). An alternative computing paradigm called Stochastic Computing (SC) can implement CNN with low hardware footprint and power consumption. To enable building more efficient SC CNN, this work incorporates the CNN basic functions in SC that exploit correlation, share Random Number Generators (RNG), and is more robust to rounding error. Experimental results show our proposed solution provides significant savings in hardware footprint and increased accuracy for the SC CNN basic functions circuits compared to previous work.
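The correlation idea can be made concrete with a tiny bitstream sketch, assuming NumPy: in unipolar stochastic computing a value in [0, 1] is encoded as the probability of a 1 in a bitstream, an AND gate multiplies independent streams, and sharing the random number generator makes the streams correlated, so the same AND computes a minimum instead of a product; this is the kind of correlation effect the paper exploits.

import numpy as np

def encode(value, rng, n):
    # Unipolar encoding: P(bit = 1) = value
    return (rng.random(n) < value).astype(np.uint8)

n = 100_000
x, y = 0.3, 0.6

# Independent RNGs -> AND of the streams estimates x * y
r1, r2 = np.random.default_rng(1), np.random.default_rng(2)
print('uncorrelated AND ~ x*y  :', np.mean(encode(x, r1, n) & encode(y, r2, n)))

# Shared RNG -> streams are correlated and AND estimates min(x, y)
shared = np.random.default_rng(3).random(n)
sx = (shared < x).astype(np.uint8)
sy = (shared < y).astype(np.uint8)
print('correlated AND ~ min(x,y):', np.mean(sx & sy))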
Intensify Denoisy Image Using Adaptive Multiscale Product Thresholding (IJERA Editor)
This paper presents a wavelet-based multiscale products thresholding scheme for noise suppression in magnetic resonance images, proposing a method for de-noising and edge enhancement of noisy multidimensional imaging data sets. Medical images generally suffer from signal-dependent noise, i.e. speckle noise, and broken edges. Most of the noise originates from the machine and the environment and does not contribute to tissue differentiation, but it gives the image a grainy appearance, so image enhancement is required. For denoising, adaptive multiscale product thresholding based on the 2-D wavelet transform is used: contiguous wavelet subbands are multiplied to strengthen edge structure while reducing noise, since in multiscale products edges can be successfully distinguished from noise. An adaptive threshold is designed and applied to the multiscale products, rather than to the wavelet coefficients, to recognize important features. For edge enhancement, the Canny edge detection algorithm is used together with the scale multiplication technique. Simulation results show that the proposed technique best suppresses Poisson noise among several noise types (salt & pepper, speckle and random noise), with performance evaluated by means of PSNR and MSE.
Mathematical Model for Hemodynamic and Hormonal Effects of Human Ghrelin in He... (IJERA Editor)
Hemodynamic and hormonal effects of human ghrelin in healthy volunteers. To investigate the hemodynamic and hormonal effects of ghrelin, a novel growth hormone (GH)-releasing peptide, six healthy men were given an intravenous bolus of human ghrelin or placebo, and vice versa, 1-2 weeks apart in a randomized fashion. Ghrelin elicited a marked increase in circulating GH, and the elevation of GH lasted longer than 60 minutes after the bolus injection. Injection of ghrelin significantly decreased mean arterial pressure without a significant change in heart rate. In summary, human ghrelin elicited a potent, long-lasting GH release and had beneficial hemodynamic effects by reducing cardiac afterload and increasing cardiac output without an increase in heart rate. Thus, the purpose of this study was to investigate the hemodynamic and hormonal effects of intravenous ghrelin in healthy volunteers. The paper also discusses the constant stress level of healthy volunteers, with times to damage from stress effects and recoveries.
Maintenance cost reduction of a hydraulic excavator through oil analysis (IJERA Editor)
The purpose of this article is to present the economic advantages that oil analysis can offer to companies operating hydraulic excavators. The financial advantages result from lower maintenance costs and increased productivity of the equipment. Real situations from an infrastructure construction company are presented in which mechanical failures occurred that could have been avoided if lubricant analysis had been implemented effectively.
Improved SCTP Scheme To Overcome Congestion Losses Over Manet (IJERA Editor)
Transmission control protocols are used for the data transmission process. TCP has been widely used for data transmission over wired communication with different bandwidths and message delays across the network. TCP provides communication using a 3-way handshake, in which RTS and ACK messages are exchanged with the server end and the data message is then transmitted over the available bandwidth. This does not provide security against flooding attacks on the network. TCP provides communication between different nodes of a wired network, but when multi-streaming occurs TCP does not give proper system throughput, which is a significant issue in the existing scheme. In the proposed work, to overcome this issue, SCTP and an improved SCTP transmission control protocol are implemented and the system performance is evaluated. SCTP provides 4-way handshake communication for message transmission, and the improved SCTP improves performance by redirecting messages to other nodes when the queue length reaches its full value, which increases security and also provides communication services with multi-streaming and multi-homing. Many senders and receivers can communicate over a wired network through the same routers using different communication approaches, which degrades the TCP protocol. Finally, we evaluate parameters for performance assessment. We designed and implemented our testbed using Network Simulator (NS-2.35) to test the performance of both routing protocols.
Fluid Structure Interaction Based Investigation of Convergent-Divergent Nozzle (IJERA Editor)
This document discusses fluid-structure interaction (FSI) analysis of a convergent-divergent nozzle. It begins by introducing FSI and describing methods for analyzing FSI problems numerically. It then discusses the aerodynamic design of a convergent-divergent nozzle and using computational fluid dynamics (CFD) to simulate flow at different nozzle pressure ratios (NPR) and obtain structural loads. These loads are then used in structural analysis to determine the optimal nozzle thickness. CFD simulations were run for NPR values from 0.257 to 1.89, with the 1.51 NPR value producing maximum stress. Structural analysis with a factor of safety of 3 determined that a 12.5mm thickness nozzle is
Review of Jobran Khalil Jobran's works from Romantic Features (IJERA Editor)
Jobran Khalil Jobran, the great immigrant writer and poet, has been known for a view of romantic poetry, text form and reflective language that set him apart from his contemporaries. This great poet made a wonderful impression on modern Arab literature, and this led to the creation of a special style which, at that time, addressed entity and existence through mysticism and a kind of intuition derived from the receiver's view of the resolution of social, emotional and religious situations, as he shone with his myths and other great works. The aim of this article is to investigate the romantic features of his works and elegant lyrics from the different viewpoints of critics, and in particular the influence of the presence of women, especially his mother, on his romantic phenomena. This is a driving and distinctive factor that cannot be neglected, given the strategically important impact it had on his career and on the creation of his own style. Social and environmental conditions changed the course of his career; these conditions affected him and were inevitably reflected in his poems. When conditions are suitable, feeling manifests itself and talent shows itself in the best way.
Cooling Of Power Converters by Natural Convection (IJERA Editor)
This document summarizes a study on cooling power converters on trains through natural convection. It discusses a numerical analysis of the effect of discrete heat flux distributions on a vertical aluminum plate representing the train wall. The plate contains 12 resistors in 3 rows and 4 columns representing electronic components. Simulations were run to determine the influence of heat flux distribution and channel dimensions on maximum temperature. Experimental work was also conducted to validate the numerical model. In conclusion, the heat flux distribution significantly impacts heat transfer, with distributed fluxes allowing better cooling than uniform fluxes.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Development of Abrasive Selection Model/Chart for Palm Frond Broom Peeling Ma...IJERA Editor
A model for predicting the friction required by a palm frond broom peeling machine for effective peeling of palm leaf to broom bristle and a chart for selecting the best abrasive material for this machine’s peeling operation were developed in this study using mechanistic modeling method. The model quantified the relationship between the coefficient of friction and other operational parameters of this machine while the abrasives selection chart constitutes a plot of this measured friction parameter against the abrasive materials used in palm frond broom peeling machine fabrication. The values of the coefficient of friction of palm leaf on different abrasive materials used in this plot were determined from experimental study of the effect of moisture content level of naturally withered palm leaves (uninfluenced by external forces) on their coefficient of friction with the abrasives. Results revealed the average moisture content of palm leaf this machine can peel effectively as 6.96% and also that the roughest among the abrasives that approximate the coefficient of friction for a specific design of this peeling machine gives maximum peeling efficiency. Thus, the roughest among the abrasive materials that approximate the coefficient of friction for a specific design of this machine should be selected and used for its fabrication and operation.
Non-life claims reserves using Dirichlet random environmentIJERA Editor
The purpose of this paper is to propose a stochastic extension of the Chain-Ladder model in a Dirichlet random environment to calculate the provisions for disaster payments. We study Dirichlet processes centered around the distribution of continuous-time stochastic processes such as a Brownian motion or a continuous-time Markov chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving the drift and volatility parameters of the GBM is a Dirichlet process, and we propose an estimation method based on Gibbs sampling.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses and compares five predictive data mining techniques: principal component analysis, correlation coefficient analysis, principal component regression, nonlinear partial least squares, and linear regression. It first provides background on data acquisition, preparation, and preprocessing techniques. It then describes each predictive technique, including how they handle issues like collinearity in datasets. Finally, it discusses how these techniques will be applied to four different datasets and the results compared to determine which technique best predicts the response variable while reducing variables.
This document discusses an enhanced technique for secure and reliable watermarking using Modified Haar Wavelet Transform (MFHWT). The proposed technique embeds a watermark into an original image using discrete wavelet transform (DWT) and wavelet packet transform (WPT) according to the size of the watermark. MFHWT is a memory efficient, fast, and simple transform. The watermarking process involves embedding and extraction processes. Various watermarking techniques in different transform domains are discussed, including DWT and WPT. The proposed algorithm uses MFHWT for decomposition and reconstruction. Image quality is measured using metrics like MSE and PSNR, with higher PSNR indicating better quality. The technique achieves robustness
This document describes a steganographic method based on integer wavelet transform and genetic algorithm. The proposed method embeds secret messages into the integer wavelet transform coefficients of images. A genetic algorithm is used to generate an optimal mapping function for embedding bits into coefficients in 8x8 blocks. After embedding, an optimal pixel adjustment process is applied to minimize differences between the original and embedded images. Experimental results on Lena and Baboon images show the proposed method achieves higher data hiding capacity and PSNR values than previous related work.
A Novel Algorithm on Wavelet Based Robust Invisible Digital Image Watermarkin...IJERA Editor
This paper presents a new algorithm on wavelet-based robust and invisible digital image watermarking for multimedia security. The proposed algorithm has been designed, implemented and verified using MATLAB R2014a simulation for both embedding and extraction of the watermark, and the results show significant improvement in performance metrics like PSNR, SSIM, Mean Correlation and MSE over the other existing algorithms in the current literature. The cover image considered in our algorithm is of size (256x256) and the binary watermark image size is taken as (16x16).
This document proposes using the discrete wavelet transform (DWT) with truncation for privacy-preserving data mining. The DWT decomposes data into approximation and detail coefficients. Truncating the detail coefficients distorts the data while maintaining statistical properties. An experiment shows the method effectively conceals sensitive information while preserving data mining performance after distortion.
Here an efficient multi-resolution watermarking methodology for copyright protection of digital images is introduced. By adapting the watermark signal to the wavelet coefficients, the proposed method is highly image-adaptive and the watermark signal can be strengthened in the most significant parts of the image. As this property also increases watermark visibility, a model of the human visual system is incorporated to prevent perceptual visibility of the embedded watermark signal. Experimental results show that the proposed system preserves the image quality and is robust against most common image processing distortions. Furthermore, the hierarchical nature of the wavelet transform allows detection of the watermark at various resolutions, reducing the computational load needed for watermark detection depending on the noise level. The performance of the proposed system is shown to be superior to that of other available schemes reported in the literature.
Image Steganography Using Wavelet Transform And Genetic AlgorithmAM Publications
This paper presents the application of Wavelet Transform and Genetic Algorithm in a novel steganography scheme. We employ a genetic algorithm based mapping function to embed data in Discrete Wavelet Transform coefficients in 4x4 blocks of the cover image. The optimal pixel adjustment process is applied after embedding the message. We utilize the frequency domain to improve the robustness of steganography, and we implement the Genetic Algorithm and Optimal Pixel Adjustment Process to obtain an optimal mapping function that reduces the difference error between the cover and the stego-image, therefore improving the hiding capacity with low distortion. Our simulation results reveal that the novel scheme outperforms the adaptive steganography technique based on wavelet transform in terms of peak signal to noise ratio and capacity, 39.94 dB and 50% respectively.
Fuzzy Type Image Fusion Using SPIHT Image Compression TechniqueIJERA Editor
This paper presents a fuzzy type image fusion technique using Set Partitioning in Hierarchical Trees (SPIHT). It is concluded that fusion at higher single levels provides better fusion quality. This technique can be used for fusion of fuzzy images as well as for multi-modal image fusion. The proposed algorithm is very simple, easy to implement, and could be used for real-time applications. The paper also provides a comparative study between the proposed and previously existing techniques, and validates the proposed algorithm using Peak Signal to Noise Ratio (PSNR) and Root Mean Square Error (RMSE).
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
IRJET- Wavelet Transform along with SPIHT Algorithm used for Image Compre...IRJET Journal
This document presents a method for medical image compression using lifting-based wavelet transform coupled with the SPIHT (Set Partition in Hierarchical Trees) algorithm. The lifting-based wavelet transform is used to decompose images into wavelet coefficients, addressing drawbacks of conventional wavelet transforms. SPIHT is then used to encode the wavelet coefficients. Experimental results showed the proposed method provides better PSNR, quality factor, and MSSIM values for medical images at low bit rates, compared to traditional compression methods. The compressed images can be represented in formats like GIF, JPG, BMP and PNG.
IRJET- An Overview of Hiding Information in H.264/Avc Compressed VideoIRJET Journal
This document provides an overview of hiding information in H.264/AVC compressed video. It discusses different information hiding techniques such as bit plane replacement, spread spectrum, histogram manipulation and matrix encryption. It identifies locations within the H.264 video compression process where information can be hidden, such as during prediction, transformation, quantization and entropy coding. It reviews related information hiding strategies for each location and compares strategies based on payload, overhead, video quality and complexity. The document aims to provide a better understanding of information hiding in compressed video and identify new opportunities.
This document summarizes an article that proposes a new image steganography technique using discrete wavelet transform. The technique applies an adaptive pixel pair matching method from the spatial domain to the frequency domain. Data is embedded in the middle frequencies of the discrete wavelet transformed image because they are more robust to attacks than high frequencies. The coefficients in the low frequency sub-band are preserved unchanged to improve image quality. The experimental results showed better performance with discrete wavelet transform compared to the spatial domain.
A New Wavelet based Digital Watermarking Method for Authenticated Mobile SignalsCSCJournals
The mobile network security is becoming more important as the number of data being exchanged on the internet increases. The growing possibilities of modern mobile computing environment are potentially more vulnerable to attacks. As a result, confidentiality and data integrity becomes one of the most important problems facing Mobile IP (MIP). To address these issues, the present paper proposes a new Wavelet based watermarking scheme that hides the mobile signals and messages in the transmission. The proposed method uses the successive even and odd values of a neighborhood to insert the authenticated signals or digital watermark (DW). That is the digital watermark information is not inserted in the adjacent column and row position of a neighborhood. The proposed method resolves the ambiguity between successive even odd gray values using LSB method. This makes the present method as more simple but difficult to break, which is an essential parameter for any mobile signals and messages. To test the efficacy of the proposed DW method, various statistical measures are evaluated, which indicates high robustness, imperceptibility, un-ambiguity, confidentiality and integrity of the present method.
Near Reversible Data Hiding Scheme for images using DCTIJERA Editor
This document presents a near-reversible data hiding scheme for images using discrete cosine transform (DCT). In the proposed scheme, data is embedded in the non-zero AC coefficients of DCT blocks in a way that minimizes modifications to the original coefficients, improving visual quality. During embedding, two mathematical functions are used to modify coefficients by amounts closer to their original values compared to other methods. Experimental results on test images show the proposed scheme achieves better visual quality than existing schemes while maintaining data hiding capacity and reversibility.
A systematic image compression in the combination of linear vector quantisati...eSAT Publishing House
1) The document presents a method for image compression that combines linear vector quantization and discrete wavelet transform.
2) Linear vector quantization is used to generate codebooks and encode image blocks, achieving better PSNR and MSE than self-organizing maps.
3) The encoded blocks are then subjected to discrete wavelet transform. Low-low subbands are stored for reconstruction while other subbands are discarded.
4) Experimental results show the proposed method achieves higher PSNR and lower MSE than existing techniques, preserving both texture and edge information.
IRJET- Object Detection using Hausdorff DistanceIRJET Journal
This document proposes a new object recognition system using Hausdorff distance. The system aims to improve on existing methods like YOLO that struggle with small objects and can capture garbage data. The document outlines preprocessing steps like noise cancellation, representing shapes as point sets, and extracting features. It then describes using Hausdorff distance and shape context to find the best match between input and reference shapes. Testing on datasets showed encouraging results for recognizing handwritten digits.
Similar to An Efficient Frame Embedding Using Haar Wavelet Coefficients And Orthogonal Component Analysis (20)
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems and the need to limit power consumption in ultra-large-scale-integration chips of very high density have recently led to rapid and inventive progress in low-power design. The most effective technique in energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design, so by improving the performance of the basic gates one can improve the performance of the whole system. In this paper, proposed circuit designs of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product and rise time, and compared with other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with percentage improvements of 65% for the NOR gate, 7% for the NAND gate and 34% for the XNOR gate.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained strong momentum due to their numerous advantages over fossil fuel alternatives; the advantages go beyond sustainability to include financial support and stability. The work in this paper introduces a hybrid system combining PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram, which sets the priorities and requirements of the system, is presented. The proposed approach allows a setup to improve its power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits to existing plants. The short return on investment of the proposed approach supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Introduction - e-waste - definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal and treatment methods of e-waste - mechanism of extraction of precious metal from leaching solution - global scenario of e-waste - e-waste in India - case studies.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
An Efficient Frame Embedding Using Haar Wavelet Coefficients And Orthogonal Component Analysis
An Efficient Frame Embedding Using Haar Wavelet Coefficients And Orthogonal Component Analysis Charchika Saraswat1, Diwakar Gautam2, Anupam Agarwal3 1Department of Electronics & Communication, Jagannath University, Jaipur, Rajasthan, India 2Department of Computer Science and Engineering, MNIT, Jaipur, Rajasthan, India 3Department of Electronics & Communication, Jagannath University, Jaipur, Rajasthan, India Abstract Digital media, applications, copyright defense, and multimedia security have become a vital aspect of our daily life. Digital watermarking is a technology used for the copyright security of digital applications. In this work we have dealt with a process able to mark digital pictures with a visible and semi invisible hided information, called watermark. This process may be the basis of a complete copyright protection system. Watermarking is implemented here using Haar Wavelet Coefficients and Principal Component analysis. Experimental results show high imperceptibility where there is no noticeable difference between the watermarked video frames and the original frame in case of invisible watermarking, vice-versa for semi visible implementation. The watermark is embedded in lower frequency band of Wavelet Transformed cover image. The combination of the two transform algorithm has been found to improve performance of the watermark algorithm. The robustness of the watermarking scheme is analyzed by means of two distinct performance measures viz. Peak Signal to Noise Ratio (PSNR) and Normalized Coefficient (NC).
Keywords: Digital Watermark, Principal Component Analysis, Discrete (Haar) Wavelet Transform, PSNR, Normalized Coefficient.
I. Introduction:
Continuous efforts are being made to devise an efficient watermarking scheme, but the techniques proposed so far do not seem to be robust to all possible attacks and multimedia data processing operations. Watermarking has seen a sudden increase in interest, most likely due to the increase in concern over Intellectual Property Rights (IPR). Watermarking techniques may be relevant in various application areas including copyright protection, copy protection, tamper detection, fingerprinting, etc. [1-3]. Based on their embedding domain, watermarking schemes can be classified either as Spatial Domain (the watermarking system directly alters the main data elements, such as pixels in an image, to hide the watermark data) or Transform Domain (the watermarking system alters the frequency transforms of the data elements to hide the watermark data). The latter has proved to be more robust than spatial domain watermarking [1], [4]. To transfer an image to its frequency representation, one can use several reversible transforms such as the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), or Discrete Fourier Transform (DFT) [1]. Spatial domain based techniques, in contrast, cannot withstand most of the common attacks such as compression and high-pass or low-pass filtering [1], [4].
Since the financial implications of some application areas like fingerprinting and copyright protection are very high, and since no successful algorithm seems to be available so far to prevent illegal copying of multimedia content, the primary goal of this research work is to develop a watermarking scheme for video frames (stored in the spatial domain as well as the transform domain) which can withstand the known attacks and various image manipulation operations.
II. Haar Wavelet Transform:
DWT is the discrete variant of the wavelet transform. The wavelet transform represents a valid alternative to the cosine transform used in standard JPEG. The DWT of images is a transform based on a tree structure with n levels that can be implemented using an appropriate bank of filters. Essentially, two strategies can be followed, which differ basically in the criterion used to extract the strings of image samples to be processed by the filter bank. Most image watermarking schemes operate either in the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT) domain. A few watermarking algorithms employ more exotic transforms such as the Fourier-Mellin transform and the fractal transform [6].
The DWT is computed by successive low-pass and high-pass filtering of the discrete time-domain signal. This is called the Mallat algorithm, or Mallat-tree decomposition. Its significance lies in the way it connects the continuous-time multi-resolution analysis to discrete-time filters. Let a signal be denoted by x[n], where n is an integer index, and let G0 and H0 represent the low-pass and high-pass filter transfer functions respectively. At each level the high-pass filter produces the detail information d[n], while the low-pass filter, associated with the scaling function, produces the coarse approximation a[n]; the time-frequency plane is thus resolved. The filtering and decimation process is continued until the desired level is reached, the length of the signal determining the maximum number of levels. The DWT of the original signal is then obtained by concatenating all the coefficients a[n] and d[n], starting from the last level of decomposition.
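To make the decomposition concrete, the following short sketch (an illustration added here, not code from the original implementation; it assumes a 256 x 256 luminance channel and the PyWavelets library) performs the 2-level Haar decomposition that is later used in the embedding stage.

# Minimal sketch: 2-level Haar DWT of a luminance channel using PyWavelets,
# yielding the LL band into which the scheme embeds the watermark.
import numpy as np
import pywt

y = np.random.rand(256, 256)           # stand-in for the Y (luminance) channel

# Level-1 decomposition: LL1 plus the (LH1, HL1, HH1) detail bands
LL1, (LH1, HL1, HH1) = pywt.dwt2(y, 'haar')

# Level-2 decomposition applied to LL1, as in Step 2 of the embedding procedure
LL2, (LH2, HL2, HH2) = pywt.dwt2(LL1, 'haar')

print(LL2.shape)                        # (64, 64) = N/4 x N/4 for N = 256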
2.1 Haar kernel function:
The family of N Haar functions h_k(t) is defined on the interval 0 ≤ t ≤ 1. The shape of the Haar function of index k is determined by two parameters p and q, where k = 2^p + q − 1 (with p ≥ 0 and 1 ≤ q ≤ 2^p), and k ranges over k = 0, 1, 2, ..., N − 1. When k = 0, the Haar function is defined as the constant h_0(t) = 1/√N; when k > 0, the Haar function is defined as

h_k(t) = (1/√N) · { 2^(p/2) for (q − 1)/2^p ≤ t < (q − 1/2)/2^p; −2^(p/2) for (q − 1/2)/2^p ≤ t < q/2^p; 0 otherwise }.
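As an illustration of this definition (a sketch added here, not code from the paper), the snippet below samples the N Haar functions from the (p, q) parameterisation above and checks their orthonormality numerically.

# Sampling the N Haar functions h_k(t) on [0, 1) from the (p, q) parameterisation.
import numpy as np

def haar_function(k, t, N):
    """Evaluate h_k(t) for t in [0, 1)."""
    if k == 0:
        return np.full_like(t, 1.0 / np.sqrt(N))
    p = int(np.floor(np.log2(k)))      # largest power of two not exceeding k
    q = k - 2**p + 1                   # so that k = 2**p + q - 1, with 1 <= q <= 2**p
    h = np.zeros_like(t)
    h[((q - 1) / 2**p <= t) & (t < (q - 0.5) / 2**p)] = 2**(p / 2)
    h[((q - 0.5) / 2**p <= t) & (t < q / 2**p)] = -2**(p / 2)
    return h / np.sqrt(N)

N = 8
t = np.arange(N) / N                   # one sample per subinterval
H = np.array([haar_function(k, t, N) for k in range(N)])
print(np.round(H @ H.T, 6))            # approximately the identity: the functions are orthonormal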
III. Principal Component Analysis:
Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called the principal components. The number of principal components is less than or equal to the number of original variables. The transformation is defined so that the first principal component captures as much of the variability in the data as possible (i.e. has the highest possible variance), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to (i.e. uncorrelated with) the preceding components. The principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables.

PCA was invented by Karl Pearson in 1901 [9] as an analogue of the principal axes theorem in mechanics. It was later independently developed (and named) by Harold Hotelling in the 1930s [10]. The method is mostly used as a tool in exploratory data analysis and for making predictive models. PCA can be achieved by eigenvalue decomposition of a data covariance (or correlation) matrix, or by singular value decomposition of the data matrix, usually after mean-centering (and normalizing or using Z-scores) the data matrix for each attribute [10]. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores [11]. PCA is the simplest of the true eigenvector-based multivariate analyses. Its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. A high-dimensional multivariate dataset is viewed as a set of coordinates in a data space (one axis per variable). PCA is closely related to factor analysis.

The main advantage of PCA is that, once these patterns in the data have been identified, the data can be compressed by reducing the number of dimensions without much loss of information. PCA projects the data into a new coordinate system in which the direction of maximum variance becomes the first axis, known as the first principal component; the second and third principal components follow in the same way, and so on. The maximum power concentration lies in the first principal component.
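The equivalence between the covariance-eigendecomposition and SVD routes mentioned above can be illustrated with the short sketch below; it is an added example on synthetic data, not part of the paper.

# PCA of a z-scored data matrix via eigendecomposition of its covariance matrix,
# cross-checked against the SVD route.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # 100 observations, 5 variables
Z = (X - X.mean(axis=0)) / X.std(axis=0)       # mean-centred, scaled (Z-scores)

# Route 1: eigenvalue decomposition of the covariance matrix
C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
coeff = eigvecs[:, np.argsort(eigvals)[::-1]]  # coefficients, largest variance first

# Route 2: singular value decomposition of Z gives the same directions
_, _, Vt = np.linalg.svd(Z, full_matrices=False)

scores = Z @ coeff                             # principal component scores
print(np.allclose(np.abs(coeff), np.abs(Vt.T), atol=1e-8))  # True (up to sign) for this data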
The following algorithm is utilized for PCA:
3.1 Algorithm for PCA:
Let a data set be represented by a matrix Q with p rows and m columns, where p represents the number of data points and m represents the number of dimensions used to represent each data point.
Step 1: Compute the mean μi and standard deviation σi of each row Di, where i varies from 1 to p.
Step 2: Compute Zi according to the following equation:
Zi = (Di − μi) / σi (1)
Here Zi represents a centered, scaled version of Di, of the same size as Di.
Step 3: Perform principal component analysis on Zi (size p x m) to obtain the principal component coefficient matrix coeffi (size m x m).
Step 4: Calculate the vector Score(i) as
Score(i) = Zi × coeffi (2)
where Score(i) represents the principal component scores of the i-th sub-block.
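A direct transcription of these steps for a single sub-block might look as follows; the function name and the NumPy-based implementation are illustrative choices added here, not code from the paper.

# Sketch of Algorithm 3.1 for one n x n sub-block of the LL band.
import numpy as np

def block_pca_scores(block):
    """Return (Z, coeff, Score) for a sub-block, following Steps 1-4."""
    mu = block.mean(axis=1, keepdims=True)            # Step 1: row means
    sigma = block.std(axis=1, keepdims=True) + 1e-12  # row standard deviations (guard against 0)
    Z = (block - mu) / sigma                          # Step 2: centred, scaled rows
    cov = np.cov(Z, rowvar=False)                     # Step 3: PCA of Z via its covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    coeff = eigvecs[:, np.argsort(eigvals)[::-1]]     # coefficients, largest variance first
    Score = Z @ coeff                                 # Step 4: principal component scores
    return Z, coeff, Score

block = np.random.rand(8, 8)                          # a stand-in 8 x 8 LL sub-block
Z, coeff, Score = block_pca_scores(block)
print(Score.shape)                                    # (8, 8)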
IV. Proposed Method:
In this work we propose an invisible and semi-visible watermarking scheme (depending on the dynamic range of the hiding parameter alpha), using the methodology described by the following steps:
4.1 Steps:
4.1.1) Embedding the Watermark in cover image:
Step 1: Convert each frame from RGB to YUV colour format.
Step 2: Apply a 2-level Haar transform to the luminance (Y) component and obtain the 4 sub-bands LL, LH, HL and HH, each of size N/4 x N/4.
Step 3: Divide the LL sub-band into k non-overlapping sub-blocks, each of dimension n x n, the same size as the watermark message.
Step 4: Embed the watermark with strength α into each sub-block by first obtaining the principal component scores through the PCA Algorithm (3.1) and then applying the equation below:
score(i) = Score(i) + αW (3)
where score(i) represents the modified principal component matrix of the i-th sub-block.
Step 5: Apply inverse PCA on the modified PCA components of the sub-blocks of the LL sub-band to obtain the modified wavelet coefficients.
Step 6: Apply the 2-level inverse DWT to obtain the watermarked luminance component of the frame, then convert the video frame back into its RGB components.
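A minimal sketch of Steps 2-6 on a single luminance channel is given below. It assumes the PyWavelets library for the Haar transform, mirrors Algorithm 3.1 for the block-wise PCA, and uses illustrative function names added here rather than code from the paper.

# Hedged sketch of the embedding stage (Steps 2-6) on a luminance channel Y.
import numpy as np
import pywt

def _block_pca(block):
    """Centre and scale the rows of a block, then return (mu, sigma, coeff, Score)."""
    mu = block.mean(axis=1, keepdims=True)
    sigma = block.std(axis=1, keepdims=True) + 1e-12
    Z = (block - mu) / sigma
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    coeff = eigvecs[:, np.argsort(eigvals)[::-1]]
    return mu, sigma, coeff, Z @ coeff

def embed_watermark(Y, W, alpha=0.05, n=16):
    """Embed an n x n watermark W into luminance channel Y with strength alpha."""
    LL1, d1 = pywt.dwt2(Y, 'haar')                  # level-1 Haar decomposition
    LL2, d2 = pywt.dwt2(LL1, 'haar')                # level-2: the LL band carries the mark
    LL_mod = LL2.copy()
    for r in range(0, LL2.shape[0], n):             # non-overlapping n x n sub-blocks
        for c in range(0, LL2.shape[1], n):
            mu, sigma, coeff, Score = _block_pca(LL2[r:r + n, c:c + n])
            score = Score + alpha * W               # eq. (3): modify the PC scores
            Z_mod = score @ coeff.T                 # inverse PCA (coeff is orthogonal)
            LL_mod[r:r + n, c:c + n] = Z_mod * sigma + mu   # undo centring and scaling
    LL1_mod = pywt.idwt2((LL_mod, d2), 'haar')      # inverse level 2
    return pywt.idwt2((LL1_mod, d1), 'haar')        # inverse level 1: watermarked Y

Y = np.random.rand(256, 256)                        # stand-in luminance channel
W = np.random.rand(16, 16)                          # stand-in gray-value watermark
Y_marked = embed_watermark(Y, W)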
4.1.2) Watermark Extraction from the Watermarked Frame:
The steps below are carried out in parallel on the original and watermarked images to obtain Score(i) and score(i).
Step 1: Convert each frame from RGB to YUV colour format.
Step 2: Choose the luminance (Y) component of the frame and apply a 2-level Haar transform to obtain the 4 sub-bands LL, LH, HL and HH, each of size N/4 x N/4.
Step 3: Divide the LL sub-band into n x n non-overlapping sub-blocks.
Step 4: Pass each block of the chosen LL band to the PCA Algorithm (3.1).
Step 5: From the LL sub-band, extract the watermark gray values from the principal components of each sub-block using the equation below:
W(i) = (score(i) − Score(i)) / α (4)
where W(i) is the watermark extracted from the i-th sub-block.
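The corresponding extraction stage can be sketched as follows; it assumes access to both the original and the watermarked luminance channels, and averaging the per-block copies of the watermark is a choice made for this illustration rather than a step prescribed by the paper.

# Hedged sketch of the extraction stage: the same 2-level Haar decomposition and
# block-wise PCA are applied to both channels, followed by eq. (4).
import numpy as np
import pywt

def _block_scores(block):
    """Principal component scores of a block, as in Algorithm 3.1."""
    mu = block.mean(axis=1, keepdims=True)
    sigma = block.std(axis=1, keepdims=True) + 1e-12
    Z = (block - mu) / sigma
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    return Z @ eigvecs[:, np.argsort(eigvals)[::-1]]

def extract_watermark(Y_orig, Y_marked, alpha=0.05, n=16):
    """Recover the n x n watermark from each LL sub-block pair and average the copies."""
    def ll2(Y):
        LL1, _ = pywt.dwt2(Y, 'haar')
        return pywt.dwt2(LL1, 'haar')[0]
    LL_o, LL_m = ll2(Y_orig), ll2(Y_marked)
    copies = []
    for r in range(0, LL_o.shape[0], n):
        for c in range(0, LL_o.shape[1], n):
            Score = _block_scores(LL_o[r:r + n, c:c + n])    # original scores
            score = _block_scores(LL_m[r:r + n, c:c + n])    # watermarked scores
            copies.append((score - Score) / alpha)           # eq. (4)
    return np.mean(copies, axis=0)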
4.2 Verification of the Result:
The MSE (Mean Square Error) and NC (Normalized Coefficient) values are calculated for the watermarking procedure. The criterion for a good watermarking technique is that the MSE value should be low and the NC value should be high. The MSE reflects the similarity of the original image to the watermarked image, while the NC reflects the deterioration or damage of the extracted watermark compared with the original watermark that was used for hiding in the previous stage. The greater the resemblance of the watermarked or attacked frame to the cover (original) image, the better the watermarking scheme.
4.2.1 PSNR: The Peak Signal-to-Noise Ratio (PSNR) is used to evaluate the deviation of the watermarked and the attacked watermarked frames from the original frame, and is defined as
PSNR = 10 log10 (255^2 / MSE), measured in dB (decibel) units,
where the MSE (mean squared error) between the original frame I and the distorted (watermarked) frame I', both of size m x n, is the mean of the squared differences between them:
MSE = (1 / (m·n)) Σ_i Σ_j [ I(i, j) − I'(i, j) ]^2.
4.2.2 The Normalized Coefficient (NC): This gives a measure of the robustness of the watermarking. NC ranges between 0 and 1 and is defined as
NC = Σ_i W(i)·W'(i) / √( Σ_i W(i)^2 · Σ_i W'(i)^2 ),
where W and W' represent the original and extracted watermark respectively.
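For completeness, the two quality measures can be computed as in the short sketch below (an added illustration, assuming 8-bit frames and NumPy arrays).

# Small sketch of the two quality measures used in the paper.
import numpy as np

def psnr(original, distorted):
    """Peak signal-to-noise ratio in dB for 8-bit frames."""
    mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def normalized_coefficient(W, W_extracted):
    """Normalized correlation between original and extracted watermarks."""
    num = np.sum(W * W_extracted)
    den = np.sqrt(np.sum(W ** 2) * np.sum(W_extracted ** 2))
    return num / den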
V. Experimental Results:
The proposed algorithm is implemented on a Core-i5 processor running a 64-bit Windows 7 platform. The experiment was performed on a given set of cover images and watermark images; the 'baboon.tif', 'lena.tif', 'pirate.tif' and 'cameraman.tif' images, among others, are considered for the experimental work. The watermark images are taken as gray-value images, while the cover images are colour images, as depicted in Fig. (1 & 2).
Fig.1 Cover Image Set
Fig. 2 Watermark Images set
The cover images are first transformed to the cylindrical colour space (YUV). The luminance component is then decomposed by the DWT, from which the LL (approximation) band is extracted. The principal coefficients are evaluated for non-overlapping distinct blocks of the LL band, and the embedding is carried out by Algorithm 4.1.1, as represented by Fig. (3).
Fig. 3 Watermarked Images
The embedded watermark or message image can be extracted using Algorithm 4.1.2, applied in parallel to the watermarked and cover images. The extraction process is carried out on the results of Fig. (3), and the extracted watermarks are represented by Fig. (4). The performance of the proposed watermarking algorithm is evaluated using the prominent performance measures PSNR and NC.
Fig. 4 Extracted Watermarks or Messages
The following plots represent the PSNR and NC values for two cover images. They show that with the proposed methodology we have succeeded in achieving the targeted high PSNR values and NC values close to 1.
Fig. 5(a), (b) Plots for PSNR & NC
VI. Future Scope & Conclusion:
In this article we proposed an efficient algorithm for invisible and semi-visible watermarking, depending on the value of the parameter alpha under consideration. A higher value gives a semi-visible mark, but if alpha keeps increasing it may degrade the frame quality, while for a lower value of alpha the system generates invisibly watermarked frames. As future scope, the concepts of cryptography and digital watermarking can be combined to implement a more secure digital watermarking system. The watermarking technique can be applied in the frequency domain for various applications of the watermark, such as image watermarks, and it can also be implemented with other spatial-domain techniques and cryptographic algorithms, using advanced encryption techniques to encrypt the messages. As future work, the video frames can be subjected to scene-change analysis so that an independent watermark is embedded in the sequence of frames forming a scene, with the process repeated for all scenes within a video.
REFERENCES
[1] Saraju P. Mohanty, "Digital Watermarking: A Tutorial Review", URL: http://www.csee.usf.edu/~smohanty/research/Reports/WMSurvey1999Mohanty.pdf and http://citeseer.ist.psu.edu/mohanty99digital.htm.
[2] W. Bender, D. Gruhl, N. Morimoto and A. Lu, "Techniques for data hiding", IBM Systems Journal, Vol. 35, No. 3/4, 1996, pp. 313-336.
[3] M. Arnold, M. Schmucker and S.D. Wolthusen, Techniques and Applications of Digital Watermarking and Content Protection, Artech House, Norwood, 2003.
[4] I.J. Cox, J. Kilian, T. Leighton and T. Shamoon, "Secure Spread Spectrum Watermarking for Multimedia", IEEE Trans. on Image Processing, Vol. 6, No. 12, 1997, pp. 1673-1687.
[5] V. Potdar, S. Han and E. Chang, "A survey of digital image watermarking techniques", Proceedings of the 3rd IEEE International Conference on Industrial Informatics: Frontier Technologies for the Future of Industry and Business, Perth, WA, August 2005, pp. 709-716.
[6] F. Bossen, M. Kutter and F. Jordan, "Digital signature of color images using amplitude modulation", in Proc. of SPIE Storage and Retrieval for Image and Video Databases, San Jose, USA, Vol. 3022-5, February 1997, pp. 518-526.
[7] C.T. Hsu and J.L. Wu, "Hidden Digital Watermarks in Images", IEEE Trans. on Image Processing, Vol. 8, No. 1, 1999, pp. 55-68.
[8] I.W. Selesnick, R.G. Baraniuk and N.C. Kingsbury, "The Dual-Tree Complex Wavelet Transform", 2005.
[9] H. Hotelling, "Analysis of a complex of statistical variables into principal components", Journal of Educational Psychology, Vol. 24, 1933, pp. 417-441 and 498-520.
[10] H. Hotelling, "Relations between two sets of variates", Biometrika, Vol. 27, 1936, p. 321; H. Abdi and L.J. Williams, "Principal component analysis", Wiley Interdisciplinary Reviews: Computational Statistics, Vol. 2, 2010, pp. 433-459, doi:10.1002/wics.101.
[11] N. Terzija, "Digital Image Watermarking in the Wavelet Domain", Technical Report, Faculty of Engineering Sciences, Gerhard-Mercator-Universität Duisburg, December 2002, http://www.fb9dv.uni-duisburg.de/members/ter/trep_1202.pdf