This document summarizes an extended matrix encoding algorithm for steganography proposed in a research paper. The algorithm aims to improve the embedding efficiency and rate of the classic F5 steganography system. It does this by extending the hash function used in matrix encoding to multiple layers, allowing more secret bits to be embedded into each carrier cell while still only modifying one bit. The encoding is represented by a quad (dmax, n, k, L) where L indicates the maximum extension layer. Secret bits are tested against specific extended codes up to layer L, and if they match, additional bits can be embedded into the carrier cell. Experimental results showed the extended algorithm performs better than the classic F5 system.
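The paper's layered (dmax, n, k, L) extension is not reproduced here, but the classic (1, n, k) matrix encoding it builds on can be sketched as follows. `embed` and `extract` are illustrative names, and the syndrome formulation assumes the standard Hamming-code construction (n = 2^k - 1 cover bits carry k message bits with at most one change), not the paper's exact notation.

```python
def embed(cover_bits, msg_bits):
    # Classic (1, n, k) matrix encoding: n = 2**k - 1 cover bits carry
    # k message bits while flipping at most one cover bit.
    k = len(msg_bits)
    n = 2 ** k - 1
    assert len(cover_bits) == n
    # Syndrome: XOR of the (1-based) positions of all set cover bits.
    syndrome = 0
    for i, b in enumerate(cover_bits, start=1):
        if b:
            syndrome ^= i
    # The message, read as a k-bit integer, is the desired syndrome.
    target = int("".join(map(str, msg_bits)), 2)
    flip = syndrome ^ target        # position of the single bit to flip (0 = none)
    stego = list(cover_bits)
    if flip:
        stego[flip - 1] ^= 1
    return stego

def extract(stego_bits, k):
    # The receiver recomputes the syndrome to recover the k message bits.
    syndrome = 0
    for i, b in enumerate(stego_bits, start=1):
        if b:
            syndrome ^= i
    return [int(c) for c in format(syndrome, "0%db" % k)]
```

With k = 3 the scheme embeds 3 bits into 7 cover bits per change, which is the efficiency gain the layered extension then pushes further.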
This document proposes an adaptive steganography technique for hiding secret data in digital images. The technique uses variable bit length embedding based on wavelet coefficients of the cover image. A logistic map is used to generate a secret key, which determines the RGB color planes and blocks used for data hiding. Wavelet coefficients are classified into ranges, and the number of bits hidden corresponds to the coefficient value range. Extraction involves selecting the same coefficients based on the key and calculating the hidden bits. The technique aims to improve security, capacity and imperceptibility compared to existing constant bit length methods. Evaluation shows the proposed method provides good security since variable bits are hidden randomly using the secret key.
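As a rough illustration of the key-generation idea (the paper's exact parameters are not given in this summary), a logistic map driven by a secret seed can reproducibly select blocks on both the embedding and extraction sides; the value `r = 3.99`, the seed, and the index mapping below are all assumptions for this sketch.

```python
def logistic_sequence(x0, r=3.99, n=16):
    # Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k); for r near 4
    # the orbit is chaotic and highly sensitive to the seed x0 (the key).
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def select_blocks(x0, num_blocks, n=16):
    # Map each chaotic value to a block index; the same secret seed
    # reproduces the same selection on the extraction side.
    return [int(x * num_blocks) % num_blocks for x in logistic_sequence(x0, n=n)]
```

Because the orbit diverges rapidly for nearby seeds, an attacker without the exact key cannot reconstruct which blocks were used.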
AN ADAPTIVE PSEUDORANDOM STEGO-CRYPTO TECHNIQUE FOR DATA COMMUNICATIONIJCNCJournal
The document describes a proposed adaptive pseudorandom stego-crypto technique for data communication. The technique combines stream cipher cryptography with a modified pseudorandom LSB substitution technique. This provides an evenly distributed cipher text while also enhancing security through increased brute force search times and reduced time complexity by avoiding collisions during random pixel selection. The proposed method uses three parameters that are optimized through experimental analysis to minimize distortions, increase cipher text scattering, and reduce collisions and time complexity. Results demonstrate the technique maintains good perceptual quality while improving upon previous methods.
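The summary does not spell out the paper's collision-avoidance mechanism; one common way to select random pixels without collisions is a key-seeded permutation of all pixel indices, sketched here as an assumption rather than the authors' actual construction.

```python
import random

def pixel_order(num_pixels, key):
    # Seed a PRNG with the shared key and permute all pixel indices.
    # Each index appears exactly once, so repeated draws can never
    # collide, unlike rejection-sampling random positions.
    rng = random.Random(key)
    order = list(range(num_pixels))
    rng.shuffle(order)
    return order
```

Both sides derive the identical embedding order from the key, and the time complexity stays linear because no draw ever has to be retried.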
A novel secure image steganography method based on chaos theory in spatial do...ijsptm
This paper presents a novel approach to building a secure data hiding technique in digital images. The
image steganography technique takes advantage of the limited power of the human visual system (HVS). It
uses an image as the cover medium for embedding a secret message. The most important requirement for a
steganographic algorithm is to be imperceptible while maximizing the size of the payload. In this paper a
method is proposed to encrypt the secret bits of the message based on chaos theory before embedding them
into the cover image. A 3-3-2 LSB insertion method is used for the image steganography. Experimental
results show a substantial improvement in the Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF)
values of the proposed technique over the base technique of 3-3-2 LSB insertion.
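A minimal sketch of the 3-3-2 LSB insertion step itself (without the chaos-based encryption), assuming the usual split of 3 message bits into the red LSBs, 3 into the green, and 2 into the blue:

```python
def embed_332(pixel, byte):
    # Hide one 8-bit value per RGB pixel: 3 bits in R, 3 in G, 2 in B.
    r, g, b = pixel
    r = (r & 0b11111000) | (byte >> 5)            # top 3 message bits
    g = (g & 0b11111000) | ((byte >> 2) & 0b111)  # middle 3 bits
    b = (b & 0b11111100) | (byte & 0b11)          # bottom 2 bits
    return (r, g, b)

def extract_332(pixel):
    # Reassemble the byte from the channels' low bits.
    r, g, b = pixel
    return ((r & 0b111) << 5) | ((g & 0b111) << 2) | (b & 0b11)
```

Each channel changes by at most its low-bit range (7 for R and G, 3 for B), which is why the scheme stays below the HVS perception threshold.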
This document summarizes a research paper on a Probabilistic Data Encryption Scheme (PDES). The paper presents a probabilistic encryption scheme that combines the security of Goldwasser and Micali's probabilistic encryption with the efficiency of deterministic schemes. The scheme is based on the assumption that solving the quadratic residuosity problem is computationally infeasible without knowing the factorization of the composite integer. An example is provided to illustrate how the encryption and decryption algorithms work using quadratic residues modulo a composite integer. The paper concludes that the scheme provides semantic security similar to Goldwasser-Micali under the assumption that the quadratic residuosity problem is hard.
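A toy version of the underlying Goldwasser-Micali idea, with deliberately tiny primes (real moduli are hundreds of digits long), illustrates how quadratic residuosity hides a single bit; the specific PDES construction from the paper is not reproduced here.

```python
import math, random

# Toy Goldwasser-Micali probabilistic bit encryption (illustration only).
p, q = 101, 113            # secret factorization
N = p * q                  # public modulus

def legendre(a, pr):
    # Euler's criterion: 1 if a is a quadratic residue mod prime pr, pr-1 if not.
    return pow(a, (pr - 1) // 2, pr)

# Public y: a quadratic non-residue modulo both p and q.
y = next(a for a in range(2, N)
         if legendre(a, p) == p - 1 and legendre(a, q) == q - 1)

def encrypt_bit(b):
    # c = y^b * r^2 mod N: a random residue when b = 0, a non-residue when b = 1.
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(y, b, N) * pow(r, 2, N)) % N

def decrypt_bit(c):
    # Knowing the factor p, test quadratic residuosity of c mod p.
    return 0 if legendre(c % p, p) == 1 else 1
```

The fresh random `r` makes two encryptions of the same bit look unrelated, which is the source of the semantic security the paper inherits.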
This document presents an improved multi-SOM clustering algorithm that uses the Davies-Bouldin index to determine the optimal number of clusters. The multi-SOM algorithm iteratively clusters an initial self-organizing map (SOM) grid using the DB index at each level until the index reaches its minimum value, indicating the best number of clusters. Experimental results on five datasets show the proposed algorithm performs as well as or better than k-means, BIRCH, and a previous multi-SOM algorithm in determining the correct number of clusters.
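The Davies-Bouldin index used as the stopping criterion can be computed directly from its definition; the sketch below is a generic implementation, not the paper's multi-SOM code.

```python
import math

def davies_bouldin(points, labels):
    # DB index: average, over clusters, of the worst-case ratio
    # (S_i + S_j) / M_ij, where S_i is the mean distance of cluster i's
    # points to its centroid and M_ij the centroid distance. Lower is better.
    clusters = sorted(set(labels))
    cents, scatters = [], []
    for c in clusters:
        pts = [p for p, l in zip(points, labels) if l == c]
        cent = [sum(col) / len(pts) for col in zip(*pts)]
        cents.append(cent)
        scatters.append(sum(math.dist(p, cent) for p in pts) / len(pts))
    db = 0.0
    for i in range(len(clusters)):
        db += max((scatters[i] + scatters[j]) / math.dist(cents[i], cents[j])
                  for j in range(len(clusters)) if j != i)
    return db / len(clusters)
```

The multi-SOM loop described above would evaluate this score at each clustering level and stop where it reaches its minimum.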
In this paper, we propose a new method of non-adaptive LSB steganography in still images that
improves the embedding efficiency from 2 to 8/3 random bits per embedding change, even at an
embedding rate of 1 bit per pixel. The method takes 2 bits of the secret message at a time
and compares them to the LSBs of the two pixel values chosen for embedding; it always assumes
a single mismatch between the two and uses the second LSB of the first pixel value to hold the
index of the mismatch. It is shown that the proposed method outperforms the security of LSB
replacement, LSB matching, and LSB matching revisited by reducing the probability of
detection by their current targeted steganalysis methods. Other advantages of the proposed
method are reducing the overall bit-level changes to the cover image for the same amount of
embedded data and avoiding complex calculations. Finally, the new method introduces little
additional distortion in the stego image, which can be tolerated.
The document proposes using generic local structures called n-gons, consisting of n neighboring minutiae points, as template points for secure fingerprint matching. N-gons are used to construct fingerprint templates that are secured using the fuzzy vault construct. The performance of different n-gon-based matching schemes is evaluated on the FVC2002-DB2 dataset in terms of the ZeroFAR metric. The best performing scheme achieved a ZeroFAR of 10.47% with no failures to capture, outperforming another popular fuzzy vault implementation on the same dataset that reported a ZeroFAR of 14% with a 2% failure to capture rate. Security considerations for the fuzzy vault scheme and limitations on the scoring function are also discussed.
In this paper, a fruit image data set is used to compare the efficiency and accuracy of two widely used Convolutional Neural Networks, namely ResNet and DenseNet, for the recognition of 50 different kinds of fruits. In the experiment, the structures of ResNet-34 and DenseNet_BC-121 (with bottleneck layers) are used. The mathematical principles, experimental details and experimental results are explained through comparison.
This document summarizes an article that proposes a new image steganography technique using discrete wavelet transform. The technique applies an adaptive pixel pair matching method from the spatial domain to the frequency domain. Data is embedded in the middle frequencies of the discrete wavelet transformed image because they are more robust to attacks than high frequencies. The coefficients in the low frequency sub-band are preserved unchanged to improve image quality. The experimental results showed better performance with discrete wavelet transform compared to the spatial domain.
ADAPTIVE AUTOMATA FOR GRAMMAR BASED TEXT COMPRESSIONcsandit
The Internet and the ubiquitous presence of computing devices everywhere are generating a
continuously growing amount of information. However, the information entropy is not uniform.
This allows the use of data compression algorithms to reduce the demand for more powerful
processors and larger data storage equipment. This paper presents an adaptive rule-driven
device - the adaptive automaton - as the device to identify repetitive patterns to be compressed in
a grammar-based lossless data compression scheme.
This paper proposes a new compressive sensing based method for simultaneous data compression and convergent encryption for secure deduplication, suitable for efficient use in cloud storage. It performs signal acquisition, compression, and encryption at the same time. The measurement matrix is generated using a hash key and is exploited for encryption. The method appears well suited to the cloud model, considering both data security and storage efficiency.
Density Based Clustering Approach for Solving the Software Component Restruct...IRJET Journal
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
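For reference, the DBSCAN procedure the approach relies on can be sketched in a few lines; this minimal version (with assumed `eps` and `min_pts` parameters) is generic, not the paper's implementation:

```python
import math

def dbscan(points, eps, min_pts):
    # Minimal DBSCAN: labels are cluster ids from 0; -1 marks noise.
    labels = [None] * len(points)
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # tentatively noise
            continue
        cluster += 1                  # i is a core point: start a cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point, reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:    # j is also a core point: expand
                queue.extend(nb)
    return labels
```

Note that no cluster count is supplied, which is exactly the property the document argues makes DBSCAN suitable for restructuring.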
In the machine learning community there is a trend of constructing nonlinear versions of linear algorithms
through the 'kernel method', for example kernel principal component analysis, kernel Fisher discriminant
analysis, Support Vector Machines (SVMs), and recent kernel clustering algorithms. Typically, in
unsupervised kernel clustering algorithms, a nonlinear mapping is first applied to map the data into a
much higher-dimensional feature space, and clustering is then performed there. A drawback of these kernel
clustering algorithms is that the cluster prototypes reside in the high-dimensional feature space and
therefore lack clear, intuitive descriptions unless an additional approximate projection from the feature
space back to the data space is used, as done in the literature. This paper utilizes the 'kernel method'
to derive a novel clustering algorithm, based on the conventional fuzzy c-means algorithm (FCM), called
the kernel fuzzy c-means algorithm (KFCM). The method adopts a new kernel-induced metric in the data
space to replace the original Euclidean norm, so that the cluster prototypes still reside in the data
space and the clustering results can be interpreted in the original space. This property is exploited for
clustering incomplete data. Experiments on synthetic data show that KFCM achieves better clustering
performance and is more robust than other variants of FCM for clustering incomplete data.
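The KFCM iteration can be sketched from the standard membership and prototype update formulas, assuming a Gaussian kernel; the initialization, `sigma`, and fuzzifier `m` below are illustrative choices, not values from the paper.

```python
import math, random

def kfcm(points, c=2, m=2.0, sigma=3.0, iters=50, init=None, seed=0):
    # Kernel fuzzy c-means with Gaussian kernel K(x,v) = exp(-|x-v|^2 / sigma^2).
    # The kernel-induced distance is 2*(1-K), yet prototypes stay in data space.
    rng = random.Random(seed)
    dims = len(points[0])
    vs = list(init) if init else rng.sample(points, c)
    K = lambda x, v: math.exp(-sum((a - b) ** 2 for a, b in zip(x, v)) / sigma ** 2)
    for _ in range(iters):
        # Membership update: u_ik proportional to (1 - K(x_k, v_i))^(-1/(m-1)).
        u = []
        for x in points:
            w = [(1.0 - K(x, v) + 1e-12) ** (-1.0 / (m - 1)) for v in vs]
            s = sum(w)
            u.append([wi / s for wi in w])
        # Prototype update: kernel-weighted mean of the data points.
        for i in range(c):
            num, den = [0.0] * dims, 0.0
            for x, uk in zip(points, u):
                w = (uk[i] ** m) * K(x, vs[i])
                den += w
                for d in range(dims):
                    num[d] += w * x[d]
            vs[i] = tuple(nd / den for nd in num)
    return vs, u
```

Because the prototypes `vs` are points in the original data space, the result can be read off directly, which is the interpretability property the abstract emphasizes.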
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
Feed-forward neural network (FNN) performance depends on the training algorithm and on architecture selection. Several parameters determine the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The exponential number of possible architectural combinations cannot be managed manually, so a specific architecture can be designed automatically by an algorithm that builds a system with better generalization ability. FNN architecture can be determined using numerous optimization algorithms. This paper proposes a new methodology in which the number of hidden layers and of neurons per hidden layer is estimated using the self-organizing feature map (SOFM) training algorithm, which selects the best architecture automatically from a population of candidate architectures according to a test-error criterion. The proposed approach is tested on four benchmark classification datasets of different sizes.
Vertex covering has important applications for wireless sensor networks such as monitoring link failures,
facility location, clustering, and data aggregation. In this study, we designed three algorithms for
constructing a vertex cover in wireless sensor networks. The first algorithm, an adaptation of Parnas and
Ron's algorithm, is a greedy approach that finds a vertex cover by using the degrees of the nodes. The
second algorithm finds a vertex cover from a graph matching computed with Hoepman's weighted matching
algorithm. The third algorithm first forms a breadth-first search tree and then constructs a vertex cover
by selecting nodes at predefined levels of the tree. We show the operation of the designed algorithms,
analyze them, and provide simulation results in the TOSSIM environment. Finally, we have implemented,
compared and assessed all these approaches. The first algorithm has the smallest transmitted message
count, while the third algorithm turns out to give the best vertex cover approximation ratio.
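The degree-based greedy idea behind the first algorithm can be sketched in centralized form (the paper's version is distributed and message-passing based, which this sketch does not capture):

```python
def greedy_vertex_cover(edges):
    # Repeatedly add the highest-degree vertex of the remaining graph
    # to the cover and delete all edges it covers.
    remaining = set(edges)
    cover = set()
    while remaining:
        degree = {}
        for u, v in remaining:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = max(degree, key=degree.get)
        cover.add(best)
        remaining = {e for e in remaining if best not in e}
    return cover
```

Every edge is incident to at least one chosen vertex on return, so the result is always a valid cover, though not necessarily a minimum one.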
Fixed Point Realization of Iterative LR-Aided Soft MIMO Decoding AlgorithmCSCJournals
Multiple-input multiple-output (MIMO) systems have been widely acclaimed in order to provide high data rates. Recently Lattice Reduction (LR) aided detectors have been proposed to achieve near Maximum Likelihood (ML) performance with low complexity. In this paper, we develop the fixed point design of an iterative soft decision based LR-aided K-best decoder, which reduces the complexity of existing sphere decoder. A simulation based word-length optimization is presented for physical implementation of the K-best decoder. Simulations show that the fixed point result of 16 bit precision can keep bit error rate (BER) degradation within 0.3 dB for 8×8 MIMO systems with different modulation schemes.
Non-Causal Video Encoding Method of P-FrameIDES Editor
In this paper, the feasibility and efficiency of non-causal prediction in a P-frame is examined, and based on the findings, a new P-frame coding scheme is proposed. Motion-compensated inter-frame prediction, which has been widely used in low-bit-rate television coding, is an efficient method to reduce the temporal redundancy in a sequence of video signals. The proposed scheme therefore combines motion compensation with non-causal prediction based on an interpolative, but not Markov, representation. However, energy dispersion occurs in the scheme because the interpolative prediction transform matrix is non-orthogonal. To solve this problem, we have introduced a new conditional pel replenishment method. In addition, Rotation Scanning with feedback quantization is applied as the quantizer in this paper. Simulation results show that the proposed coding scheme achieves an approximate 0.3-2 dB improvement over the traditional hybrid coding method at similar entropy.
Scalable Rough C-Means clustering using Firefly algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
OPTICS is an algorithm that identifies variable density clusters without specifying the distance parameter (eps) required by DBSCAN. It creates an ordering of database objects based on their core and reachability distances. This ordering represents the clustering structure and density connectivity of the data. OPTICS extends DBSCAN by processing multiple distance parameters simultaneously. It stores an object's core distance and reachability distance to assign cluster memberships based on density reachability.
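The two distances OPTICS stores per object can be written down directly from their definitions; `eps` and `min_pts` below are generic parameters, not values from any experiment:

```python
import math

def core_distance(points, i, eps, min_pts):
    # Distance from point i to its min_pts-th nearest neighbour (counting
    # the point itself), or None if i is not a core point within eps.
    dists = sorted(math.dist(points[i], q) for q in points)
    d = dists[min_pts - 1]
    return d if d <= eps else None

def reachability_distance(points, i, j, eps, min_pts):
    # Reachability of j with respect to i: max(core-distance(i), dist(i, j)).
    cd = core_distance(points, i, eps, min_pts)
    if cd is None:
        return None
    return max(cd, math.dist(points[i], points[j]))
```

Ordering objects by these reachability values is what lets OPTICS expose clusters at every density level in a single pass.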
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...cscpconf
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data; it is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. It is a semi-supervised approach: the training samples are semi-labelled, i.e. labels are known for some samples and unknown for others. The method starts with classification, which is done using the concept of the ETL algorithm. In the classification process various classes are formed; each class is then treated as a region, and the average of each region is computed separately. These averages are the centres of the regions, which are used for clustering with the FCM algorithm. Once the clustering process is complete and the semi-supervised data have been labelled, all samples are classified by DIBNNFC. The method proposed here is exhaustively tested on different benchmark datasets, and it is found that as the value of the training parameter increases, both the number of hidden neurons and the training time decrease. The reported results use a real character-recognition data set and are compared with an existing semi-supervised classifier; the proposed semi-supervised learning approach leads to higher classification accuracy.
1) The document proposes a technique that allows cryptography and steganography to be combined by encrypting text data and hiding it within the least significant bits of pixels in an image.
2) It discusses using the Advanced Encryption Standard (AES) algorithm to encrypt the text before embedding it in the selected areas of an image.
3) For decryption, the receiver must know the correct encryption key in order to extract the hidden text from the stego-image and decrypt it back to the original plaintext.
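A sketch of the encrypt-then-embed pipeline described above. Note the AES step is replaced here by a SHA-256 keystream stand-in (an explicit assumption: a real implementation would call an actual AES library), while the LSB embedding follows the scheme as described.

```python
import hashlib, itertools

def toy_cipher(data, key):
    # Stand-in for the AES step (illustration only): XOR with a
    # SHA-256-derived keystream; applying it twice recovers the input.
    stream = itertools.chain.from_iterable(
        hashlib.sha256(key + bytes([i])).digest() for i in itertools.count())
    return bytes(b ^ k for b, k in zip(data, stream))

def lsb_embed(pixels, data):
    # Write each ciphertext bit into the LSB of one pixel value.
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    assert len(pixels) >= len(bits)
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def lsb_extract(pixels, num_bytes):
    # Read the LSBs back and reassemble the ciphertext bytes.
    bits = [p & 1 for p in pixels[:num_bytes * 8]]
    return bytes(sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))
```

The receiver runs `lsb_extract` and then decrypts with the shared key, matching the two-layer protection the document describes: without the key, even a recovered bitstream is still ciphertext.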
An Efficient Clustering Method for Aggregation on Data FragmentsIJMER
Clustering is an important step in the process of data analysis, with applications in numerous fields. Clustering ensembles have emerged as a powerful technique for combining different clustering results to obtain a quality clustering. Existing clustering aggregation algorithms are applied directly to the data points and are inefficient when the number of data points is large. This project defines an efficient approach for clustering aggregation based on data fragments, where a data fragment is any subset of the data. To increase efficiency, clustering aggregation is performed directly on data fragments under a comparison measure and normalized mutual information measures, and enhanced versions of the clustering aggregation algorithms (Agglomerative, Furthest, and Local Search) are described, which keep the computational complexity minimal while increasing accuracy.
Elgamal signature for content distribution with network codingijwmn
This document proposes a scheme that uses ElGamal signature in network coding to enhance security. Network coding allows nodes to generate output packets as linear combinations of input packets. However, this makes the network vulnerable to pollution attacks where malicious nodes can insert corrupted packets. The proposed scheme signs data packets with ElGamal signatures. When nodes receive packets, they can verify the signatures' validity to check for corrupted packets without decoding. The scheme exploits the linearity of network coding and allows nodes to easily check packet integrity. An example is provided to demonstrate how the ElGamal signature scheme would work in the context of network coding for content distribution.
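A toy ElGamal signing/verification pair over a small prime group shows the check intermediate nodes would run on each packet hash; the group parameters and key values below are illustrative only, and the network-coding linearity exploitation from the paper is not reproduced.

```python
import math, random

# Toy ElGamal signature over a small prime group (illustration only).
p = 467                      # public prime
g = 2                        # generator modulo p
x = 127                      # private signing key
y = pow(g, x, p)             # public verification key

def sign(h):
    # h: the packet hash reduced mod p-1. Pick k coprime to p-1, then
    # r = g^k and s = k^-1 * (h - x*r) mod (p-1).
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) == 1:
            break
    r = pow(g, k, p)
    s = ((h - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s

def verify(h, r, s):
    # Any node can check g^h == y^r * r^s (mod p) without the private key.
    if not (0 < r < p):
        return False
    return (pow(y, r, p) * pow(r, s, p)) % p == pow(g, h, p)
```

A node receiving a packet verifies the signature before recombining it, so corrupted packets injected by a polluting node are rejected without decoding.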
We propose an algorithm for training Multi-Layer Perceptrons for classification problems, which we named Hidden Layer Learning Vector Quantization (H-LVQ). It consists of applying Learning Vector Quantization to the last hidden layer of an MLP, and it gave very successful results on problems containing a large number of correlated inputs. It was applied with excellent results to the classification of Rutherford backscattering spectra and to a benchmark image recognition problem. It may also be used for efficient feature extraction.
A hybrid method for designing fiber bragg gratings with right angled triangul...Andhika Pratama
This document proposes a hybrid method for designing fiber Bragg gratings (FBGs) with right-angled triangular spectra using the discrete layer peeling (DLP) approach and quantum-behaved particle swarm optimization (QPSO) algorithm. The DLP approach is used to generate an initial guess of the complex coupling coefficients. Then the QPSO technique optimizes the initial coefficients by minimizing the mean squared error between the target and computed reflectivity spectra. Simulation results show the method can design single and multi-channel right-angled triangular spectrum FBGs with linear edges and spectra consistent with the target.
Automatic power generation control structure for smart electrical power gridseSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Human action recognition using local space time features and adaboost svmeSAT Publishing House
Exposure to elevated temperatures and cooled under different regimes – a stud...eSAT Publishing House
This study examined the effects of elevated temperatures and different cooling regimes on blended concrete. Concrete cubes containing 30% ground granulated blast furnace slag were subjected to temperatures from 150°C to 550°C and cooled via furnace cooling, air cooling, or sudden water cooling. Weight loss and residual compressive and split tensile strengths were then tested. Results showed that weight and strengths decreased significantly with higher temperatures and depended strongly on the cooling method, with furnace cooling producing the best retention of properties. Furnace and air cooling resulted in gradual heat loss while sudden cooling induced thermal shock. This research provides information about fire-damaged concrete structures and their residual performance.
Growth and physical properties of pure and manganese doped strontium tartrate...eSAT Publishing House
The document summarizes the growth and characterization of pure and manganese-doped strontium tartrate trihydrate single crystals. Crystals were grown using the single diffusion gel growth technique by varying parameters like pH, concentration, and time. Crystals up to 10x5x3 mm in size were obtained. The crystals were characterized through techniques like PXRD, SXRD, FTIR, UV-Vis, PL, SHG, AAS, hardness, and TGA measurements. The crystals were found to be monoclinic, optically transparent, mechanically soft, and thermally stable up to 100°C. Manganese doping was found to increase PL yield and SHG efficiency.
This document summarizes an article that proposes a new image steganography technique using discrete wavelet transform. The technique applies an adaptive pixel pair matching method from the spatial domain to the frequency domain. Data is embedded in the middle frequencies of the discrete wavelet transformed image because they are more robust to attacks than high frequencies. The coefficients in the low frequency sub-band are preserved unchanged to improve image quality. The experimental results showed better performance with discrete wavelet transform compared to the spatial domain.
ADAPTIVE AUTOMATA FOR GRAMMAR BASED TEXT COMPRESSIONcsandit
The Internet and the ubiquitous presence of computing devices are generating a continuously growing amount of information. However, the information entropy of this data is not uniform, which allows data compression algorithms to reduce the demand for more powerful processors and larger data storage equipment. This paper presents an adaptive rule-driven device, the adaptive automaton, as the mechanism for identifying the repetitive patterns to be compressed in a grammar-based lossless data compression scheme.
This paper proposes a new compressive sensing based method for simultaneous data compression and convergent encryption, targeting secure deduplication in cloud storage. It performs signal acquisition, compression, and encryption at the same time. The measurement matrix is generated using a hash key and is exploited for encryption. This makes the approach well suited to the cloud model with respect to both data security and storage efficiency.
Density Based Clustering Approach for Solving the Software Component Restruct...IRJET Journal
This document presents research on using the DBSCAN clustering algorithm to solve the problem of software component restructuring. It begins with an abstract that introduces DBSCAN and describes how it can group related software components. It then provides background on software component clustering and describes DBSCAN in more detail. The methodology section outlines the 4 phases of the proposed approach: data collection and processing, clustering with DBSCAN, visualization and analysis, and final restructuring. Experimental results show that DBSCAN produces more evenly distributed clusters compared to fuzzy clustering. The conclusion is that DBSCAN is a better technique for software restructuring as it can identify clusters of varying shapes and sizes without specifying the number of clusters in advance.
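The summary above does not include the component data being clustered, so the core DBSCAN step can only be sketched with a toy, hypothetical feature matrix (each row standing in for a component's coupling metrics); this is a minimal self-contained DBSCAN, not the paper's implementation:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN: grow clusters from core points; -1 marks noise."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neigh = np.where(d[i] <= eps)[0]
        if len(neigh) < min_pts:
            continue                      # not core; may later join as border
        labels[i] = cluster
        seeds = list(neigh)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nj = np.where(d[j] <= eps)[0]
                if len(nj) >= min_pts:    # j is core: expand through it
                    seeds.extend(nj)
        cluster += 1
    return labels

# two hypothetical groups of components plus one isolated component
X = np.array([[0, 0], [0.1, 0], [0, 0.1],
              [5, 5], [5.1, 5], [5, 5.1],
              [20, 20]])
print(dbscan(X, eps=0.5, min_pts=2))  # isolated component comes out as -1
```

Note that, as the abstract says, the number of clusters (here 2) is discovered, not supplied.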
In the machine learning community there is a trend of constructing a nonlinear version of a linear algorithm through the 'kernel method', for example kernel principal component analysis, kernel Fisher discriminant analysis, support vector machines (SVMs), and the recent kernel clustering algorithms. Typically, in unsupervised kernel clustering algorithms, a nonlinear mapping is first applied to map the data into a much higher-dimensional feature space, and clustering is then performed there. A drawback of these kernel clustering algorithms is that the cluster prototypes reside in the high-dimensional feature space and therefore lack intuitive, clear descriptions unless an additional approximate projection from the feature space back to the data space is used, as done in the literature. This paper utilizes the kernel method to derive a novel clustering algorithm, built on the conventional fuzzy c-means algorithm (FCM) and called the kernel fuzzy c-means algorithm (KFCM). The method adopts a new kernel-induced metric in the data space to replace the original Euclidean norm, so the cluster prototypes still reside in the data space and the clustering results can be interpreted in the original space. This property is also used for clustering incomplete data. Experiments on synthetic data illustrate that KFCM has better clustering performance and is more robust than other variants of FCM for clustering incomplete data.
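The KFCM idea described above (a kernel-induced metric in the data space, with prototypes kept in the input space so clusters stay interpretable) can be sketched as follows; the Gaussian kernel, its width, the fuzzifier m = 2, and the toy two-blob data are assumptions of this sketch, not the paper's settings:

```python
import numpy as np

def kfcm(X, V0, m=2.0, sigma=1.0, iters=50):
    """Kernel fuzzy c-means with a Gaussian kernel K(x, v).
    The kernel-induced distance 1 - K(x, v) replaces the Euclidean norm,
    but prototypes V remain points of the original data space."""
    V = V0.astype(float).copy()
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / sigma**2)
        dist = np.clip(1.0 - K, 1e-12, None)     # kernel-induced metric
        U = dist ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]   # prototype update in data space
    return U, V

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (20, 2)),   # blob around (0, 0)
               rng.normal(3.0, 0.2, (20, 2))])  # blob around (3, 3)
U, V = kfcm(X, V0=X[[0, 20]])                   # one seed prototype per blob
print(U.argmax(axis=1))                         # hard assignment per point
```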
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
The performance of a feed-forward neural network (FNN) depends on its training algorithm and on architecture selection. Several parameters determine the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The number of possible architectural combinations grows exponentially and cannot be managed manually, so a specific architecture should be designed automatically by an algorithm that builds a system with better generalization ability. The FNN architecture can be determined using one of numerous optimization algorithms. This paper proposes a new methodology in which the number of hidden-layer neurons of an FNN is estimated using the self-organizing feature map (SOFM) training algorithm, whose advantage is that the best architecture is selected automatically by the SOFM from a population of architectures according to test-error criteria. The proposed approach is tested on four benchmark classification datasets of different sizes.
Vertex covering has important applications for wireless sensor networks such as monitoring link failures,
facility location, clustering, and data aggregation. In this study, we designed three algorithms for
constructing vertex cover in wireless sensor networks. The first algorithm, which is an adaption of the
Parnas & Ron’s algorithm, is a greedy approach that finds a vertex cover by using the degrees of the
nodes. The second algorithm finds a vertex cover from graph matching where Hoepman’s weighted
matching algorithm is used. The third algorithm firstly forms a breadth-first search tree and then
constructs a vertex cover by selecting nodes at predefined levels of the breadth-first tree. We describe the operation of the designed algorithms, analyze them, and provide simulation results from the TOSSIM environment, in which all three approaches were implemented, compared, and assessed. The first algorithm has the smallest transmitted message count of the three, while the third algorithm achieves the best vertex cover approximation ratio.
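As a sketch of the first, degree-based approach: the distributed details are not given above, so this is a centralized greedy approximation that repeatedly takes the vertex covering the most still-uncovered edges; the sample link list is hypothetical:

```python
def greedy_vertex_cover(edges):
    """Degree-greedy vertex cover: while uncovered edges remain, add the
    vertex of highest remaining degree and drop the edges it covers."""
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        deg = {}
        for e in uncovered:               # count degrees over remaining edges
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        v = max(deg, key=deg.get)
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# hypothetical sensor links: node 0 is a hub
links = [(0, 1), (0, 2), (0, 3), (2, 3), (4, 5)]
print(sorted(greedy_vertex_cover(links)))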
Fixed Point Realization of Iterative LR-Aided Soft MIMO Decoding AlgorithmCSCJournals
Multiple-input multiple-output (MIMO) systems have been widely acclaimed in order to provide high data rates. Recently Lattice Reduction (LR) aided detectors have been proposed to achieve near Maximum Likelihood (ML) performance with low complexity. In this paper, we develop the fixed point design of an iterative soft decision based LR-aided K-best decoder, which reduces the complexity of existing sphere decoder. A simulation based word-length optimization is presented for physical implementation of the K-best decoder. Simulations show that the fixed point result of 16 bit precision can keep bit error rate (BER) degradation within 0.3 dB for 8×8 MIMO systems with different modulation schemes.
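The word-length experiment can be illustrated with a generic fixed-point quantizer; the 16-bit word length matches the abstract, but the 12-bit fractional split, the saturation model, and the Gaussian test signal are illustrative assumptions, not the paper's design:

```python
import numpy as np

def to_fixed(x, word_len=16, frac_bits=12):
    """Quantize to signed fixed point: word_len total bits, frac_bits
    fractional bits, rounding to nearest with saturation at the extremes."""
    scale = 1 << frac_bits
    lo = -(1 << (word_len - 1))
    hi = (1 << (word_len - 1)) - 1
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)            # stand-in for channel-matrix entries
err = np.abs(to_fixed(x) - x).max()
print(err)                                # worst-case quantization error
```

A word-length sweep would repeat this for each candidate precision and keep the smallest word length whose simulated BER degradation stays within the target (0.3 dB in the abstract).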
Non-Causal Video Encoding Method of P-FrameIDES Editor
In this paper, the feasibility and efficiency of non-causal prediction in a P-frame is examined, and based on the findings, a new P-frame coding scheme is proposed. Motion-compensated inter-frame prediction, which has been widely used in low-bit-rate television coding, is an efficient method of reducing the temporal redundancy in a sequence of video signals. The proposed scheme therefore combines motion compensation with non-causal prediction based on an interpolative, rather than Markov, representation. However, energy dispersion occurs in the scheme because the interpolative prediction transform matrix is non-orthogonal. To solve this problem, we introduce a new conditional pel replenishment method. In addition, rotation scanning is applied, with feedback quantization as the quantizer. Simulation results show that the proposed coding scheme achieves an improvement of approximately 0.3-2 dB at similar entropy compared with the traditional hybrid coding method.
Scalable Rough C-Means clustering using Firefly algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
OPTICS is an algorithm that identifies variable density clusters without specifying the distance parameter (eps) required by DBSCAN. It creates an ordering of database objects based on their core and reachability distances. This ordering represents the clustering structure and density connectivity of the data. OPTICS extends DBSCAN by processing multiple distance parameters simultaneously. It stores an object's core distance and reachability distance to assign cluster memberships based on density reachability.
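The two quantities OPTICS stores per object can be computed directly; whether a point counts itself among its min_pts nearest neighbours varies between presentations, so this sketch includes it, and the sample points are made up:

```python
import numpy as np

def core_and_reach(X, min_pts, o, p):
    """Core distance of point p and reachability distance of o w.r.t. p,
    the two quantities OPTICS records to build its cluster ordering."""
    d = np.linalg.norm(X - X[p], axis=1)
    core = np.sort(d)[min_pts - 1]     # distance to the min_pts-th neighbour
    reach = max(core, np.linalg.norm(X[o] - X[p]))
    return core, reach

X = np.array([[0.0, 0], [0, 1], [1, 0], [10, 10]])
print(core_and_reach(X, min_pts=2, o=3, p=0))
```

A far-away object (index 3) gets a large reachability distance, which is what lets OPTICS expose variable-density structure without a single fixed eps.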
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...cscpconf
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data; it is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. It is a semi-supervised approach: the training samples are semi-labelled, i.e. labels are known for some samples and unknown for the rest. The method starts with classification, performed using the concept of the ETL algorithm, in which various classes are formed. Samples are separated into two classes, each class is treated as a region, and the average of each region is calculated separately. These averages are the region centres, which are used for clustering with the FCM algorithm. Once clustering and the labelling of the semi-supervised data are complete, all samples are classified by DIBNNFC. The proposed method was tested exhaustively with different benchmark datasets, and it was found that increasing the training parameters decreases both the number of hidden neurons and the training time. Results are reported on a real character-recognition dataset and compared with an existing semi-supervised classifier; the proposed semi-supervised approach leads to higher classification accuracy.
1) The document proposes a technique that allows cryptography and steganography to be combined by encrypting text data and hiding it within the least significant bits of pixels in an image.
2) It discusses using the Advanced Encryption Standard (AES) algorithm to encrypt the text before embedding it in the selected areas of an image.
3) For decryption, the receiver must know the correct encryption key in order to extract the hidden text from the stego-image and decrypt it back to the original plaintext.
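The LSB layer of the scheme above can be sketched as follows. The AES step is deliberately omitted here (the embedded bytes would be AES ciphertext produced beforehand with a crypto library), and the flat `cover` array is a stand-in for image pixel data:

```python
import numpy as np

def embed_lsb(pixels, data):
    """Hide bytes in the least significant bits of a flat uint8 pixel array.
    Each pixel changes by at most 1, so the stego image looks unchanged."""
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out

def extract_lsb(pixels, n_bytes):
    """Read back n_bytes from the LSB plane."""
    bits = pixels[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.arange(256, dtype=np.uint8)     # stand-in for image pixels
stego = embed_lsb(cover, b"secret")        # in the full scheme: AES ciphertext
print(extract_lsb(stego, 6))               # b'secret'
```

The receiver would then decrypt the extracted bytes with the shared AES key, as described in point 3.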
An Efficient Clustering Method for Aggregation on Data FragmentsIJMER
Clustering is an important step in the process of data analysis, with applications in numerous fields. Clustering ensembles have emerged as a powerful technique for combining different clustering results to obtain a quality clustering. Existing clustering aggregation algorithms are applied directly to large numbers of data points and become inefficient as that number grows. This project defines an efficient approach to clustering aggregation based on data fragments, where a data fragment is any subset of the data. To increase efficiency, clustering aggregation is performed directly on data fragments under a comparison measure and normalized mutual information measures, and enhanced versions of the existing aggregation algorithms (Agglomerative, Furthest, and Local Search) are described that achieve minimal computational complexity while increasing accuracy.
Elgamal signature for content distribution with network codingijwmn
This document proposes a scheme that uses ElGamal signature in network coding to enhance security. Network coding allows nodes to generate output packets as linear combinations of input packets. However, this makes the network vulnerable to pollution attacks where malicious nodes can insert corrupted packets. The proposed scheme signs data packets with ElGamal signatures. When nodes receive packets, they can verify the signatures' validity to check for corrupted packets without decoding. The scheme exploits the linearity of network coding and allows nodes to easily check packet integrity. An example is provided to demonstrate how the ElGamal signature scheme would work in the context of network coding for content distribution.
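A textbook-sized sketch of the sign/verify pair used in such a scheme is below; the parameters p = 467, g = 2, x = 127 are illustrative toy values (far too small for real use), the message hash is taken as a plain integer, and the network-coding packaging is omitted:

```python
from math import gcd

# toy ElGamal signature parameters (not secure at this size)
p, g = 467, 2
x = 127                      # private key
y = pow(g, x, p)             # public key

def sign(h, k):
    """Sign hash h with a fresh nonce k, which must satisfy gcd(k, p-1) = 1."""
    assert gcd(k, p - 1) == 1
    r = pow(g, k, p)
    s = ((h - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s

def verify(h, r, s):
    """Accept iff g^h == y^r * r^s (mod p) -- checkable without decoding."""
    return pow(g, h, p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(100, k=213)
print(verify(100, r, s), verify(101, r, s))  # True False
```

This is the check an intermediate node would run on each received packet: a corrupted packet fails verification immediately, which is how pollution attacks are filtered out.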
We propose an algorithm for training Multi-Layer Perceptrons for classification problems, which we have named Hidden Layer Learning Vector Quantization (H-LVQ). It consists of applying Learning Vector Quantization to the last hidden layer of an MLP, and it gave very successful results on problems containing a large number of correlated inputs. It was applied with excellent results to the classification of Rutherford backscattering spectra and to a benchmark image recognition problem. It may also be used for efficient feature extraction.
A hybrid method for designing fiber bragg gratings with right angled triangul...Andhika Pratama
This document proposes a hybrid method for designing fiber Bragg gratings (FBGs) with right-angled triangular spectra using the discrete layer peeling (DLP) approach and quantum-behaved particle swarm optimization (QPSO) algorithm. The DLP approach is used to generate an initial guess of the complex coupling coefficients. Then the QPSO technique optimizes the initial coefficients by minimizing the mean squared error between the target and computed reflectivity spectra. Simulation results show the method can design single and multi-channel right-angled triangular spectrum FBGs with linear edges and spectra consistent with the target.
Automatic power generation control structure for smart electrical power gridseSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Human action recognition using local space time features and adaboost svmeSAT Publishing House
Exposure to elevated temperatures and cooled under different regimes – a stud...eSAT Publishing House
Impact resistance capacity of a green ultra high performance hybrid fibre rei...eSAT Publishing House
This document summarizes a study on the impact resistance capacity of an ultra-high performance hybrid fibre reinforced concrete (UHPHFRC). The concrete mixtures were designed using a modified particle packing model to achieve a dense matrix using a relatively low binder content of around 620 kg/m3. Steel fibres were added in varying proportions of long and short fibres. Impact testing was performed using a modified Charpy test and the results showed that long steel fibres improved the impact resistance capacity the most. Additionally, the failure mechanism under impact loading was analyzed and a model was developed that could predict the energy absorption capacity of the UHPHFRC samples.
Design of digital signature verification algorithm using relative slope methodeSAT Publishing House
This document summarizes a research paper that proposes a new algorithm for signature verification using a digital pen. The algorithm analyzes the relative slopes of a signature's segments to determine if a signature matches one stored in a database. It works by segmenting the signature, calculating the slope of each segment relative to the previous one, and storing these slope values. During verification, it compares the stored and input slope values, alongside other dynamic features like writing speed and pressure, and determines a match percentage. The paper finds that this relative slope method improves the accuracy and parameters of previous signature verification systems.
1. Pounding is a major cause of damage to adjacent buildings during earthquakes when they are constructed close together without sufficient separation.
2. The study analyzes seismic pounding forces between buildings of different heights and floor levels using software. It finds that pounding damage increases when buildings have different dynamic properties or are inadequately separated.
3. Pounding can cause non-structural to severe structural damage. The required minimum separation between buildings according to codes is 15-30 mm depending on building type, but may need to be larger.
Vibration analysis and diagnostics for oil production units by pumping rodeSAT Publishing House
This document discusses cell search and synchronization procedures in Long Term Evolution (LTE) cellular networks. It describes how a mobile unit detects and locks onto base stations when powering on or moving between cells. The mobile unit first detects the Primary Synchronization Signal (PSS) to synchronize on the subframe boundary and determine the physical layer cell identity. It then detects the Secondary Synchronization Signal (SSS) to obtain the cell identity group number and complete cell ID detection. Simulations are presented showing the probability of detecting neighboring cells under different channel conditions and signal-to-noise ratios, with better detection at cell centers compared to edges.
This document discusses investigating the behavior of a 3 degree of freedom spring mass damper system subjected to transient loads. It presents two models of the system with different damper configurations and derives the governing equations. The velocities and energies (kinetic and potential) of the oscillators are estimated by solving the equations for an exponential decaying, constant, and partial load over time. The results show the contribution of kinetic energy is minimal for oscillator 2 in all cases, while potential energy and contributions from oscillators 1 and 3 depend on the load type.
This document discusses and compares various lookup algorithms that can be used for IPv6 packet forwarding at speeds over 100 Gbps. It begins by introducing the need for IPv6 due to IPv4 address exhaustion and issues with forwarding IPv6 packets due to its larger 128-bit address size. It then summarizes four existing lookup algorithms - Distributed Memory Organizations, TrieC, Recursive Balanced Multi-way range trees, and Range tree based IPv6 lookup. The document aims to determine the best algorithm by performing a comparative analysis based on parameters like latency, throughput, memory requirements and scalability.
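None of the four surveyed algorithms is specified in the summary, so as a common reference point this sketch shows the plain bit-level longest-prefix-match trie they all compress or flatten; the prefixes here are abbreviated bit strings, not full 128-bit IPv6 addresses:

```python
class BinaryTrie:
    """One-bit-at-a-time longest-prefix-match forwarding table."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop           # terminal marker for this prefix

    def lookup(self, addr_bits):
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]       # remember longest match so far
            if b not in node:
                break
            node = node[b]
        else:
            if "hop" in node:
                best = node["hop"]
        return best

t = BinaryTrie()
t.insert("001", "A")                     # shorter, less specific prefix
t.insert("0011", "B")                    # longer, more specific prefix
print(t.lookup("00110101"), t.lookup("00101111"))
```

The weakness the surveyed algorithms attack is visible here: a lookup costs one memory access per address bit, up to 128 for IPv6, which is far too slow at 100 Gbps.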
The document presents a comparative analysis of two text segmentation algorithms, C99 and TopicTiling, that are applied to extract natural text from image documents. It first discusses related work on text segmentation techniques. It then provides an overview of the two-phase implementation: 1) text is extracted from images using preprocessing, thresholding, boundary detection and text recognition, and 2) the extracted text is segmented using C99 and TopicTiling, and the results of each are compared. The analysis shows that TopicTiling performs more efficiently than C99 at segmenting text from images.
A comparative study of physical attacks on wireless sensor networkseSAT Publishing House
Research Inventy : International Journal of Engineering and Scienceresearchinventy
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
A Spatial Domain Image Steganography Technique Based on Matrix Embedding and ...CSCJournals
This paper presents a spatial-domain algorithm that introduces less distortion to the cover image during the embedding process. Minimizing embedding impact and maximizing embedding capacity are the key factors of any steganography algorithm, and Peak Signal to Noise Ratio (PSNR) is the familiar metric for discriminating the distorted (stego) image from the cover image. Here, matrix embedding is chosen to embed a secret image that is first Huffman encoded. The Huffman-encoded image is overlaid on selected bits of all channels of the cover image pixels through matrix embedding. As a result, the stego image is constructed with very little distortion compared to the cover image, yielding a higher PSNR value. A secret image that cannot be embedded with a normal LSB technique can be overlaid with the proposed technique, since the secret image is Huffman encoded. Experimental results for standard cover images, showing the higher PSNR values obtained, are presented in this paper.
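The matrix-embedding step can be illustrated with the classic Hamming-code construction, which hides k bits in 2^k - 1 cover bits while flipping at most one of them; the (k = 3, n = 7) size is an assumption of this sketch, and the Huffman-coding stage is omitted:

```python
import numpy as np

# 3x7 parity-check matrix whose columns are the binary numbers 1..7
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

def me_embed(cover_bits, msg_bits):
    """Flip at most one of 7 cover bits so their syndrome equals msg_bits."""
    syndrome = (H @ cover_bits) % 2
    delta = 0
    for i in range(3):                   # index of the bit to flip, as 0..7
        delta |= int(syndrome[i] ^ msg_bits[i]) << i
    out = cover_bits.copy()
    if delta:                            # delta == 0 means no change needed
        out[delta - 1] ^= 1              # column delta-1 of H is delta in binary
    return out

def me_extract(stego_bits):
    """The receiver just recomputes the syndrome."""
    return (H @ stego_bits) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = me_embed(cover, msg)
print(me_extract(stego), int(np.sum(cover != stego)))
```

This is the source of the high PSNR: 3 message bits cost at most one bit change in 7 cover bits.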
Intra Block and Inter Block Neighboring Joint Density Based Approach for Jpeg...ijsc
Steganalysis is the method used to detect the presence of a hidden message in a cover medium. A novel approach is proposed for the steganalysis of JPEG images, based on feature mining in the discrete cosine transform (DCT) domain combined with machine learning. Neighboring joint density features on both intra-blocks and inter-blocks are extracted from the DCT coefficient array. After the feature space has been constructed, an SVM binary classifier is used for training and classification. The performance of the proposed method is analyzed on different steganographic systems, including F5, Pixel Value Differencing, Model Based Steganography with and without deblocking, JPHS, and Steghide. Classification accuracy is evaluated for each feature individually and for the combined features, to conclude which provides better classification.
This document describes a steganographic method based on integer wavelet transform and genetic algorithm. The proposed method embeds secret messages into the integer wavelet transform coefficients of images. A genetic algorithm is used to generate an optimal mapping function for embedding bits into coefficients in 8x8 blocks. After embedding, an optimal pixel adjustment process is applied to minimize differences between the original and embedded images. Experimental results on Lena and Baboon images show the proposed method achieves higher data hiding capacity and PSNR values than previous related work.
The document summarizes an adaptive image steganography technique that embeds secret messages into digital images. It proposes using adaptive quantization embedding, where quantization steps for image blocks are optimized to guarantee more data can be embedded in busy image areas with high contrast. The technique embeds adaptive quantization parameters and message bits into the cover image using a difference expanding algorithm. Simulation results showed the proposed scheme can provide a good balance between imperceptibility and embedding capacity.
Steganographic Scheme Based on Message-Cover matchingIJECEIAES
Steganography is one of the techniques of information security: the art of concealing data in digital files imperceptibly, without arousing suspicion. In this paper, a steganographic method based on the Faber-Schauder discrete wavelet transform is proposed. The secret data is embedded in the Least Significant Bit (LSB) of the integer part of the wavelet coefficients. The secret message is decomposed into pairs of bits, and each pair is transformed into another via a permutation chosen to obtain as many matches as possible between the message and the LSBs of the coefficients. To assess the performance of the proposed method, experiments were carried out on a large set of images and compared against prior works. Results show a good level of imperceptibility and a good imperceptibility-capacity trade-off compared to the literature.
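The pair-permutation idea can be sketched as an exhaustive search over the 24 permutations of the four 2-bit symbols, keeping the one that matches the most coefficient LSB pairs. The symbols and sample data below are illustrative, not taken from the paper.

```python
from itertools import permutations

def best_pair_permutation(msg_pairs, lsb_pairs):
    """Search the 24 permutations of the four 2-bit symbols for the one
    that maximizes exact matches between transformed message pairs and
    coefficient LSB pairs."""
    symbols = [0, 1, 2, 3]                       # the values 00, 01, 10, 11
    best_map, best_hits = None, -1
    for perm in permutations(symbols):
        mapping = dict(zip(symbols, perm))
        hits = sum(mapping[m] == c for m, c in zip(msg_pairs, lsb_pairs))
        if hits > best_hits:
            best_map, best_hits = mapping, hits
    return best_map, best_hits

msg = [0, 3, 3, 1, 2, 0]                         # message split into 2-bit pairs
lsb = [1, 2, 2, 0, 3, 1]                         # LSB pairs of the coefficients
mapping, hits = best_pair_permutation(msg, lsb)  # here every pair can be matched
```

The chosen permutation must be transmitted (or derivable) on the receiving side so the original pairs can be restored.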
Hiding text in speech signal using K-means, LSB techniques and chaotic maps IJECEIAES
In this paper, a new technique that hides a secret text inside a speech signal without any apparent noise is presented. The secret text is first scrambled using a chaotic map, the scrambled text is then encoded using the Zaslavsky map, and finally the text is hidden by breaking the speech signal into blocks and using only half of each block with the LSB and K-means algorithms. The measures SNR, PSNR, Correlation, SSIM, and MSE are applied to various speech files (".WAV") and various secret texts. We observed that the suggested technique offers high security (SNR, PSNR, Correlation, and SSIM) for the encrypted text with low error (MSE). This indicates that the noise level in the speech signal is very low and the speech purity is high, so the suggested method is effective for embedding encrypted text into speech files.
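A common way to scramble text with a chaotic map is to sort positions by the values of a logistic-map sequence, as sketched below. The parameters x0 and r are illustrative assumptions, and the paper's Zaslavsky-map encoding step is not reproduced here.

```python
def logistic_map(x0, r, n):
    """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def scramble(text, x0=0.31, r=3.99):
    """Permute characters by the sort order of a chaotic sequence."""
    seq = logistic_map(x0, r, len(text))
    order = sorted(range(len(text)), key=lambda i: seq[i])
    return "".join(text[i] for i in order), order

def unscramble(scrambled, order):
    """Invert the permutation to recover the original text."""
    out = [""] * len(scrambled)
    for pos, i in enumerate(order):
        out[i] = scrambled[pos]
    return "".join(out)

secret = "meet at dawn"
scrambled, order = scramble(secret)
assert unscramble(scrambled, order) == secret
```

In practice only the key (x0, r) needs to be shared: the receiver regenerates the same chaotic sequence and inverts the permutation.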
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
S.A.kalaiselvan- robust video data hiding at forbidden zonekalaiselvanresearch
This document summarizes a proposed video data hiding method that uses selective embedding and error correction coding to make the data hiding robust against attacks. The method uses forbidden zone data hiding to embed data in selected DCT coefficients of video frames. Adaptive coefficient selection is used to determine the best coefficients for embedding. Repeat accumulate codes are then used to encode the data bits and provide error correction against desynchronization caused by selective embedding or attacks. Frame synchronization markers are also embedded to detect attacks like frame dropping. The proposed method was found to successfully embed data in video and withstand various attacks through simulation tests.
This document summarizes the F5 steganography algorithm and the JHRF steganalysis algorithm. F5 embeds messages into JPEG images by decreasing quantized DCT coefficients. It uses techniques like permutative straddling and matrix encoding to distribute changes regularly. JHRF is designed to detect messages hidden with F5. It analyzes DCT coefficients in the frequency domain, looking for patterns that indicate embedding with normal or reverse rules around a demarcation point. The goal of JHRF is to effectively detect hidden data and obtain accurate extraction results.
Modifications in lsb based steganographyAslesha Niki
This document discusses steganography techniques for hiding secret information in digital images. It describes the Least Significant Bit (LSB) substitution method, where bits of the secret message are embedded in the LSBs of pixel values. However, this can be detected through statistical analysis of the image histogram. To address this, later techniques aim to preserve the cover image histogram by embedding extra bits as needed. The document also discusses quantizing audio signals for embedding and using linear feedback shift registers to generate encryption keystreams.
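The basic LSB substitution described above can be sketched in a few lines. Grayscale pixel values are assumed; real implementations also add a pseudorandom embedding path and a length header, and the histogram-preserving variants mentioned in the document go further still.

```python
def embed_lsb(pixels, bits):
    """Write one message bit into the least significant bit of each pixel."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it
    return stego

def extract_lsb(pixels, n_bits):
    """Read back the first n_bits message bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [154, 201, 87, 62, 90, 33]         # made-up grayscale values
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
assert extract_lsb(stego, len(message)) == message
```

Because each pixel changes by at most one intensity level, the distortion is visually negligible, which is exactly why statistical (rather than visual) analysis of the histogram is what detects it.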
This document summarizes a research article that proposes a new image steganography technique based on ant colony optimization (ACO) algorithm. The technique aims to increase data hiding capacity and optimize image quality. It uses integer wavelet transform to transform image coefficients, then applies an ACO algorithm to find optimal coefficients for embedding secret data. Experimental results on sample images showed no visual difference between original and stego images, demonstrating the technique increases robustness and embedding capacity without degrading image quality. The proposed technique is compared to existing methods based on peak signal-to-noise ratio, showing it provides higher PSNR values and better image quality.
IRJET- Data Embedding using Image SteganographyIRJET Journal
This document presents a method for secure communication using both steganography and cryptography. It discusses embedding encrypted text into an image using the discrete wavelet transform (DWT). Specifically, a text message is first encrypted with the Advanced Encryption Standard (AES) algorithm. The encrypted text is then embedded into an image by applying the DWT to decompose the image into sub-bands and hiding the data in the high-frequency sub-bands. MATLAB is used to implement a graphical user interface that allows a sender to encrypt a message, embed it into an image, and send the stego-image to a receiver. The receiver interface extracts the encrypted text from the stego-image and decrypts it using AES. The method aims to provide secure communication by combining the strengths of cryptography and steganography.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document summarizes a research paper on a Tree Based Parity Check (TBPC) scheme for data hiding. The TBPC scheme aims to reduce distortion when hiding data in a cover object like an image. It works by constructing a master tree from the cover object's bits and deriving a master string. The message is then hidden by XORing it with the master string to get a toggle string. A toggle tree is constructed from this and XORed with the master tree to get the stego object. The paper proposes a majority vote strategy for building the toggle tree that uses the minimum number of 1s, reducing distortion. Experimental results show the TBPC scheme effectively hides large payloads with minimal distortion.
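The master/toggle XOR relationship at the core of TBPC can be sketched as below. The tree construction and the proposed majority-vote strategy are omitted, and the bit strings are made up for illustration.

```python
def xor_bits(a, b):
    """Element-wise XOR of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

master = [1, 0, 1, 1, 0, 0, 1, 0]    # master string derived from the cover bits
message = [0, 1, 1, 0, 0, 1, 0, 0]   # payload to hide
toggle = xor_bits(message, master)    # toggle string: which positions must flip
stego = xor_bits(master, toggle)      # applying the toggles embeds the message
assert stego == message               # re-deriving the master string from the
                                      # stego object yields the message
```

The majority-vote toggle-tree construction matters because many different toggle trees realize the same toggle string; choosing the one with the fewest 1s minimizes how many cover bits are actually flipped.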
DCT based Steganographic Evaluation parameter analysis in Frequency domain by...IOSR Journals
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve evaluation parameters like PSNR, mean square error, and capacity. The authors propose modifying the default 8x8 quantization table by adjusting frequency values in 4 bands to increase image quality for the embedded stego image. Experimental results on test images show that using the modified table improves PSNR, decreases mean square error, and increases maximum embedding capacity compared to the default table. Therefore, the proposed method allows more secret data to be hidden with less distortion and improved image quality.
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve embedding capacity and image quality. The authors propose modifying the default 8x8 quantization table by changing frequency values to increase the peak signal-to-noise ratio and capacity while decreasing the mean square error of embedded images. Experimental results on test images show increased capacity, PSNR and reduced error when using the modified versus default table, indicating improved stego image quality. The proposed method aims to securely embed more data with less distortion than traditional DCT-based steganography.
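The role of the quantization table can be sketched as follows. The table shown is the default JPEG luminance table; the band adjustments the authors make are not reproduced here, but the mechanism is the same: smaller table entries preserve more coefficient precision, leaving more nonzero coefficients available for embedding.

```python
import numpy as np

# Default JPEG luminance quantization table (ITU-T T.81, Annex K)
Q_DEFAULT = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def quantize(dct_block, qtable):
    """Quantize an 8x8 DCT block by its quantization table."""
    return np.round(dct_block / qtable).astype(int)

def dequantize(qblock, qtable):
    """Approximate reconstruction of the DCT block."""
    return qblock * qtable
```

Lowering values in selected frequency bands of this table is what lets the modified scheme embed more data in those bands with less reconstruction error.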
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONSijscmcj
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n³), makes reduction of a large indexing structure a difficult task. The article compares the time complexity and accuracy of the algorithms implemented in two environments: the first uses the CPU and MATLAB R2011a; the second uses graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to produce a resulting matrix of large size. For both environments, computations were performed on double- and single-precision data.
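A rank-k LSI reduction via truncated SVD can be sketched with NumPy, standing in here for the MATLAB and CULA implementations compared in the article; the matrix is a toy term-by-document example.

```python
import numpy as np

def lsi_reduce(term_doc, k):
    """Rank-k LSI reduction of a term-by-document matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # keep only the k largest singular values and their vectors
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy 4-term x 3-document incidence matrix
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.],
              [0., 0., 1.]])
A_k = lsi_reduce(A, 2)   # rank-2 approximation, same shape as A
```

The O(n³) cost the article cites is exactly the `np.linalg.svd` call; the rest of the reduction is cheap matrix multiplication, which is why offloading the decomposition to the GPU is the interesting question.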
A High Throughput CFA AES S-Box with Error Correction CapabilityIOSR Journals
The document describes a proposed method for implementing a fault tolerant Advanced Encryption Standard (AES) using a Hamming error correction code. AES operates by performing rounds of transformations on blocks of data, with the most complex step being the SubBytes transformation, which involves calculating multiplicative inverses in GF(2^8). The proposed method uses composite field arithmetic to calculate these inverses more efficiently. It also applies a (12,8) Hamming error correction code to each byte before and after processing to detect and correct single-bit errors caused by radiation events, improving reliability for satellite communications. The parity check bits for the Hamming code are precalculated and stored for the AES S-box lookup tables.
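A (12,8) Hamming code of the kind the method applies per byte can be sketched as below. The bit and position conventions (parity at positions 1, 2, 4, 8; data LSB-first) are one common choice, not necessarily the paper's.

```python
def hamming12_encode(byte):
    """Encode 8 data bits into a 12-bit Hamming codeword
    (parity bits at positions 1, 2, 4, 8; 1-indexed)."""
    d = [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    code = [0] * 13                           # slot 0 unused; positions 1..12
    for pos, bit in zip([3, 5, 6, 7, 9, 10, 11, 12], d):
        code[pos] = bit                       # non-power-of-two slots hold data
    for p in (1, 2, 4, 8):                    # parity p covers positions with bit p set
        for i in range(1, 13):
            if i != p and (i & p):
                code[p] ^= code[i]
    return code[1:]

def hamming12_correct(code):
    """Locate a single-bit error via the syndrome and flip it back."""
    c = [0] + list(code)
    syndrome = 0
    for p in (1, 2, 4, 8):
        s = 0
        for i in range(1, 13):
            if i & p:
                s ^= c[i]
        if s:
            syndrome += p                     # failing checks sum to the error position
    if syndrome:
        c[syndrome] ^= 1
    return c[1:]

cw = hamming12_encode(0b10110010)
corrupted = list(cw)
corrupted[5] ^= 1                             # inject a single-bit error
assert hamming12_correct(corrupted) == cw
```

Precomputing the four parity bits per S-box entry, as the paper does, turns this per-byte encoding into a simple table lookup.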
Similar to Steganography of high embedding efficiency by using an extended matrix encoding algorithm (20)
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as fewer trees remain to absorb pollutants; higher temperatures without the tree canopy; and increased erosion and landslides. The cyclone also created large amounts of waste from destroyed trees, and proper management of this solid waste is needed to prevent the spread of disease. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating the public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed the green cover.
Impact of flood disaster in a drought prone area – case study of alampur vill...eSAT Publishing House
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods a case study of pydibhim...eSAT Publishing House
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in bangalore, indiaeSAT Publishing House
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity buildingeSAT Publishing House
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc...eSAT Publishing House
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel or band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes that lintels and lintel bands modestly improve the global performance of infilled frames with openings.
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...eSAT Publishing House
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastrucuture and landscape elements along the be...eSAT Publishing House
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in disaster management – ex...eSAT Publishing House
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear wallseSAT Publishing House
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm...eSAT Publishing House
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma...eSAT Publishing House
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability an analysis from east coast of indiaeSAT Publishing House
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due disaster of structureseSAT Publishing House
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...eSAT Publishing House
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...eSAT Publishing House
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had previously experienced cyclones in Visakhapatnam, 9% floods, and 59% landslides. Awareness of disaster alarm systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives taken after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should take governance into account alongside such disaster preparedness measures.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket Management is a stand-alone J2EE application developed using Eclipse Juno. The project contains all the information required to maintain a supermarket billing system. Its core idea is to minimize paperwork and centralize the data. All communication is handled in a secure manner: the information is stored on the client itself, and for further security the database is kept in a back-end Oracle instance so that no intruders can access it.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine the level of energy efficiency knowledge among consumers. Two districts in Bangladesh were selected to conduct the survey, covering households, showrooms, and sellers. The survey data is used to fit regression equations from which energy efficiency knowledge can be predicted, and the data is analyzed based on five important criteria. The initial target was to find factors that help predict a person's energy efficiency knowledge. The survey found that energy efficiency awareness among the people of the country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices, and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh: low-education households indicate they primarily save electricity for the environment, while high-education households indicate they are motivated by environmental concerns.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency-time app that lets a user search for blood banks as well as registered blood donors around Mumbai. The application also gives users the opportunity to become registered donors: a user has to submit a donor request from the application itself, and if the admin wishes to register the user as a donor, this can be done after completing some formalities with the organization. A special feature of this application is that the user does not have to register or sign in to search for blood banks and blood donors; simply installing the application on a mobile device is enough.

The purpose of the application is to save the user's time when searching for blood of the needed group during an emergency. It is an Android application developed in Java and XML with SQLite database connectivity, and it provides most of the basic functionality required of an emergency-time application. All details of blood banks and blood donors, such as name, number, address, and blood group, are stored in the SQLite database, so the user can retrieve this information directly rather than searching different websites and wasting precious time. The application is effective and user friendly.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
Accident detection system project report.pdfKamal Acharya
The rapid growth of technology and infrastructure has made our lives easier. The advent of technology has also increased traffic hazards: road accidents occur frequently and cause huge loss of life and property because of poor emergency facilities. Many lives could have been saved if emergency services could get accident information and reach the scene in time. This project provides an optimal solution to this drawback. A piezoelectric sensor can be used as a crash or rollover detector of the vehicle during and after a crash; from its signals, a severe accident can be recognized. According to this project, when a vehicle meets with an accident or rolls over, the piezoelectric sensor detects the signal immediately. Then, with the help of GSM and GPS modules, the location is sent to the emergency contact, and after confirming the location the necessary action is taken. If the person meets with a small accident, or there is no serious threat to anyone's life, the alert message can be terminated by the driver via a provided switch, in order to avoid wasting the valuable time of the medical rescue team.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Stegnography of high embedding efficiency by using an extended matrix encoding algorithm
IJRET: International Journal of Research in Engineering and Technology ISSN: 2319-1163
Volume: 02 Issue: 05 | May-2013, Available @ http://www.ijret.org
STEGANOGRAPHY OF HIGH EMBEDDING EFFICIENCY BY USING AN
EXTENDED MATRIX ENCODING ALGORITHM
Borkar Bharat Sampatrao1, Patil Pritesh Kashinath2
1Assistant Professor, 2Student, Department of IT, AVCOE, Sangamner, Maharashtra, India
borkar.bharat@gmail.com, mr.pkpatil1989@gmail.com
Abstract
F5 steganography differs from most LSB replacement or matching steganographic schemes because matrix encoding is used to increase the embedding efficiency while reducing the number of necessary changes. With this scheme, the hidden message inserted into the carrier media can be transferred over a more secure, imperceptible channel. The embedding domain is the quantized DCT coefficients of a JPEG image, which makes the scheme resistant to visual and statistical attacks by a steganalyst. Building on this effective scheme, this paper proposes an extended matrix encoding algorithm to further improve performance. The embedding efficiency and embedding rate are increased to a large extent by changing the hash function used in matrix encoding and changing the coding mode. The experimental results demonstrate that the extended algorithm is more capable and efficient than classic F5 steganography.
Index Terms: Steganography, LSB replacement, DCT coefficient, Hash function.
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Steganography is the art and science of writing secret
content inside a cover medium and transferring the stego medium
from the sender to the intended recipient through a subliminal
channel without arousing the suspicion of an adversary. The
presence of the hidden information is meant to be undetectable. If the
fact that communication is taking place is revealed, the
steganography is broken, regardless of whether the hidden
information itself is exposed. Thus, compared with other
related techniques such as watermarking, the property of
covertness plays a crucial role in a stegosystem.
To make the stegotext appear innocent, the
confidential message is typically embedded into the redundant
components of the cover medium. For a digital image, the least
significant bit (LSB) plane in the spatial domain is one such
component: it appears completely random and noisy, and
modifying it does not cause any noticeable change in the
appearance of the image. Several LSB-based data hiding techniques
have been proposed in recent years [2,3]. Derek Upham's JSteg was
probably the first publicly available steganographic
system for JPEG images [4]. The technique is essentially a copy
of the LSB substitution algorithm in the spatial domain: the
least significant bits of the DCT coefficients are sequentially
replaced with the secret message. Since each replacement only
exchanges two adjacent coefficient values, it causes a statistically
obvious pairs-of-values (POVs) problem, which can be
successfully detected by the chi-square test proposed by Westfeld and
Pfitzmann [5].
2. BACKGROUND
Outguess0.1 was developed to improve the encoding step by
scattering the embedding locations over the complete DCT domain
according to a pseudo-random number generator [6]. The randomly
distributed data cannot be detected successfully by the chi-square
test used against JSteg. The author of Outguess0.1 soon released a
revised version. The new algorithm, Outguess0.2, tries to ensure
that the statistical properties of the cover image are maintained
after encoding [7]. Shortly afterwards, Fridrich et al. used the
discontinuity at the border between two adjacent 8x8 blocks to
accurately estimate the length of hidden information embedded
with Outguess0.2 [8]. As an alternative to the Outguess0.2
algorithm, Westfeld invented the more secure F3 algorithm [9].
This method compares the LSB of each coefficient's absolute value
with the corresponding secret bit. If they are equal, no
modification is made; otherwise, the absolute value of the
corresponding DCT coefficient is reduced by one. With this
embedding strategy, F3 eliminates the POVs problem that exists
in JSteg. However, the distribution of the histogram of an F3
stego image looks unnatural, because F3 introduces more even
coefficients than odd coefficients into the histogram. Hence, to
deal with this skewed frequency distribution, an improvement
called F4 was developed. In F4, even-negative coefficients and
odd-positive coefficients represent one, while odd-negative
coefficients and even-positive coefficients stand for zero. If a
secret bit equals the value represented by the coefficient, no
change is made; otherwise, the absolute value of the corresponding
DCT coefficient is reduced by one. With the help of this
embedding strategy, the histogram of the stego image looks
statistically like that of a clean image. Additionally, several
other steganographic algorithms have been proposed, such as
model-based steganography [10,11], perturbed quantization [12],
YASS [13], and so on [14-18].
Nonetheless, many steganographic algorithms consider the
embedding capacity but neglect the embedding efficiency
and security, and some of them simply work the opposite way.
In this paper, we propose an extended algorithm that improves
the matrix encoding of the F5 stegosystem so that the embedding
efficiency and the embedding rate are improved at the same time.
Some changes to the hash function in matrix encoding are made
to hide additional secret bits in a fixed number of cover bits.
By exploiting an L-layer extension, we convert the triple
(dmax, n, k) into the quad (dmax, n, k, L) to increase the
embedding efficiency and embedding rate simultaneously. To
indicate the current extension layer, symbol positions are
appended in every embedding process; this mechanism lets the
receiver perform blind detection effectively. The rest of this
paper is organized as follows. Section 3 describes standard
matrix encoding, Section 4 presents the details of the proposed
extended algorithm, Section 5 gives simulation results and
analysis, and the final section provides the conclusion.
3. EXISTING SYSTEM
3.1 Matrix Encoding Basics
The F5 scheme was developed by Westfeld [9] on the basis of F3
and F4, which in turn evolved from JSteg. The secret message
is inserted by modifying the LSBs of the quantized DCT
coefficients of a JPEG image. Instead of classic LSB
replacement or matching methods, F5 uses matrix encoding
to implement the insertion and detection of the secret
message. To enhance the security of the stegosystem, an
additional 'permutative straddling' technique is used to scatter
the secret message over the whole carrier medium.
Matrix encoding, introduced by Crandall [19], improves the
embedding efficiency to a much greater extent than the classic
LSB modification methods. The embedding efficiency indicates
how many bits of secret message can be carried by one change
made in the carrier medium. In the plain LSB method, assume
the secret message and the values at the positions to be changed
follow a uniform 0-1 distribution; then the positions to be
embedded are modified with a probability of 0.5 on average.
This means an embedding efficiency of 2 bits per change: fully
embedding the whole cover image would change 50% of the carrier
data, which results in a high probability of detection by
statistical steganalysis. Reducing the density of changes in the
cover image is the most obvious and well-known way to reduce the
possibility of detection. Matrix encoding decreases the necessary
number of changes (i.e., the change density) while increasing
the embedding efficiency. The encoding process is described
below:
1) Implement JPEG lossy compression on the carrier image and obtain the quantized DCT coefficients for embedding [20].
2) Partition the LSB plane of the quantized DCT coefficients into embedding cells, each in the form of a vector a = a1 a2 ... an of length n.
The coding implemented on each embedding cell is denoted by an
ordered triple (dmax, n, k), where n is the number of modifiable
bit positions in an embedding cell (namely its length), k is the
bit length of the secret message w = w1w2...wk to be embedded
into one cell, and an embedding cell with n positions will be
changed in no more than dmax positions to embed the k secret
bits. F5 implements matrix encoding only for dmax = 1. A hash
function f is defined in Formula (1) to map the n cover bits a
into a k-bit binary string:

f(a) = XOR over i = 1..n of ai . i,    (1)

where ai denotes the i-th position of cell a and i is the
corresponding index number of the bit position, treated as a
binary number during the operation. The bit length of the binary
index i is chosen to be the same as that of the secret message w.
Subsequently, an XOR operation is applied to the value of the
hash function and the secret message w to obtain a decimal
number y:
y = w ⊕ f(a) (2)
By flipping the y-th bit position of cell a, the embedded stego
data a' is generated. In the special case y = 0, the carrier
cell a is left intact:

a' = a                          if y = 0
a' = a1 a2 ... ~ay ... an       otherwise    (3)

where ~ay is the negation of ay.
In the extraction phase, the receiver retrieves the secret
message w by simply putting the stego cell a' into the same
hash function f, i.e., w = f(a'). The detailed procedure of
matrix encoding is given in Fig. 1.
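The embedding and extraction steps above can be sketched in a few lines of Python (a minimal illustration of the scheme, not the authors' implementation; bit positions are 1-indexed as in Formula (1)):

```python
def hash_f(a):
    """Formula (1): XOR of the 1-based indices i of all set bits of cell a."""
    r = 0
    for i, bit in enumerate(a, start=1):
        if bit:
            r ^= i
    return r

def embed(a, w):
    """Embed the k-bit secret w into cell a (length n = 2**k - 1), flipping at most one bit."""
    y = w ^ hash_f(a)          # Formula (2)
    stego = list(a)
    if y != 0:                 # Formula (3): y = 0 leaves the cell intact
        stego[y - 1] ^= 1      # flip the y-th position
    return stego

def extract(stego):
    """The receiver recovers w by re-applying the same hash function."""
    return hash_f(stego)
```

For example, embedding w = 5 into the cell 1101010 flips exactly one position, and extract() returns 5 again.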
For an embedding cell, the change density D refers to the
proportion of altered bit positions. Neglecting shrinkage, all
n + 1 values of y are equally likely, and every value except
y = 0 flips exactly one of the n bits, so the change density
follows Formula (4):

D = (n/(n+1) . 1 + 1/(n+1) . 0) / n = 1/(n+1)    (4)
Another important performance factor of a steganographic
algorithm is the embedding rate R:

R = k/n    (5)

Using the change density and the embedding rate, we can
calculate the embedding efficiency E, which indicates the
average number of embedded secret message bits per change:

E = R/D = ((n+1)/n) . k    (6)
Fig -1: The block diagram of matrix encoding.
Table 1 gives the theoretic values of the change density,
embedding rate, and embedding efficiency for various (n, k)
pairs, calculated using Formulas (4)-(6).
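As a quick check of Table 1, the three quantities can be computed directly from Formulas (4)-(6) for any valid pair n = 2^k - 1 (a small sketch, not part of the paper):

```python
def performance(k):
    """Return (n, D, R, E) for the F5 coding (1, n, k) with n = 2**k - 1."""
    n = 2 ** k - 1
    D = 1 / (n + 1)   # change density, Formula (4)
    R = k / n         # embedding rate, Formula (5)
    E = R / D         # embedding efficiency = k*(n+1)/n, Formula (6)
    return n, D, R, E

for k in range(1, 5):
    n, D, R, E = performance(k)
    print(f"({n},{k})  D={100*D:.2f}%  R={100*R:.2f}%  E={E:.2f}")
```

The printed rows reproduce the first four entries of Table 1.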
In the F5 algorithm, the values of n and k must satisfy the
equation n = 2^k - 1, because a k-bit binary sequence has 2^k
possible states in all, so the binary carrier cell must have
length 2^k - 1 to express all states of the k secret bits by
modifying only one bit or keeping the cell unchanged. That is
to say, the length of the embedding cell grows exponentially
with the length of the embedded secret message, while the
modification of the carrier cell remains a single bit change.
Thus, the embedding rate gets lower and lower as the efficiency
increases: higher efficiency comes at the price of a lower rate.
This regularity not only reveals how matrix encoding works, but
also naturally suggests that we can improve both the embedding
efficiency and the embedding rate at the same time if k
increases while n and the number of changes stay fixed.
4. PROPOSED SYSTEM
4.1 Extended Encoding Algorithm
How can k be increased independently? In matrix encoding, the
hash function maps the n carrier bits to a binary sequence whose
length depends on the bit length of the index i, and the bit
length of i is chosen to match that of the secret message w. If
the length of i can be extended, we can embed additional secret
bits into one cell. Take the code (1,3,2) as an example: a 2-bit
secret message is embedded into a 3-bit embedding cell by
changing at most one bit position, and the length of i is two.
If the length of i is extended to 3 bits, we can take one
additional secret bit and XOR it with the binary result of the
hash function. However, a problem shows up: the result of the
XOR operation, namely the index y that indicates the position
to be modified, may be out of range. We may get y = (101)2 = 5
in that case, but there is no fifth bit position in a cell of
length 3. Essentially, modifying at most one bit of a carrier
cell of 3-bit length can only express four kinds of secret code,
namely '00', '01', '10', '11', of length log2 4 = 2. A secret
code extended to 3 bits has 2^3 states in all, so there is no
way to embed all eight kinds of code into a 3-bit cell using the
matrix encoding algorithm.
Actually, there is a way to extend, but it requires selecting
some extended codes carefully; the extension is conditional.
Since a 3-bit modifiable cell is only able to express four
states, we still have a one-half chance to extend by selecting
four embeddable extended codes out of the eight. When
calculating the result of the hash function, we can simply
multiply the index i by 2 to extend by 1 bit; we call this a
1-layer extension. In this case, the codes '00', '01', '10',
'11' are extended to '000', '010', '100', '110'. Similarly, we
can multiply i by 2^2 to extend by 2 bits, called a 2-layer
extension, and so on: an L-layer extension is performed by
multiplying i by 2^L. Due to the closure property
of the XOR operation,
Table -1: The performance of matrix encoding.

(n, k)     D (%)   R (%)    E (bit)
(1,1)      50      100      2
(3,2)      25      66.67    2.67
(7,3)      12.5    42.86    3.43
(15,4)     6.25    26.67    4.27
(31,5)     3.13    16.13    5.16
(63,6)     1.56    9.52     6.09
(127,7)    0.78    5.51     7.06
(255,8)    0.39    3.14     8.03
(511,9)    0.2     1.76     9.02
we can embed a secret message of more than 2 bits into a 3-bit
cell, provided that the secret code equals one of the specific
extended codes. The mode of extension is illustrated in Fig. 2.
For the extended algorithm, the coding mode implemented on the
embedding cell is redefined as a quad (dmax, n, k, L), where
the new parameter L denotes the maximum extension layer.
First, take a (k + L)-bit secret code w = w1w2...wk...wk+L
from the whole secret message sequence and test whether it
matches a specific extended code in the L-th layer. The
matching test is whether mod(w, 2^L) = 0. If the remainder
equals zero, the extension layer of the current cell lcrt is L
and the (k + L)-bit secret data can be embedded successfully.
If not, continue by testing whether the shorter (k + L - 1)-bit
secret code w = w1w2...wk...wk+L-1 matches a specific extended
code in the (L - 1)-th layer, i.e., whether mod(w, 2^(L-1)) = 0.
If so, the current extension layer is lcrt = L - 1 and this
secret code is embedded into the cell. Otherwise, continue this
kind of test until a matching code is found in some layer or no
matching code exists in any extension layer; in the latter case,
the extended algorithm rolls back to standard matrix encoding.
The final embeddable secret code has the form
w = w1w2...wk...wk+lcrt. If no extension takes place, the layer
of the current cell is lcrt = 0.
In the extended algorithm, the hash function is updated as
Formula (7):

f(a) = XOR over i = 1..n of ai . (i . 2^lcrt)    (7)
Subsequently, an XOR operation is applied to the result of the
hash function and the secret message w = w1w2...wk...wk+lcrt to
obtain a decimal number y. At this point, the range of the index
y has already been extended, so we must shrink it back to n by
dividing by the coefficient 2^lcrt:

y = (w XOR f(a)) / 2^lcrt,    (8)

where the result of w XOR f(a) is expressed as a decimal number.
Eventually, we obtain a stego cell a' by negating the y-th
position in carrier cell a:

a' = a                          if y = 0
a' = a1 a2 ... ~ay ... an       otherwise    (9)
The above implies that the introduction of the extension
mechanism raises a new problem for the receiver in the
detection process. The coding quad (dmax, n, k, L) can be
confirmed and shared by the sender and the receiver before the
start of the communication. But since the current layer lcrt
depends on the content of the secret message, the receiver
cannot predict this parameter, so the sender has to transfer
lcrt to the receiver in the embedding process. We append a
symbol s = s1s2...sm to the stego cell a' = a1a2...ay...an to
mark the layer lcrt. Because lcrt falls in the closed interval
[0, L], the symbol length m can be calculated by Formula (10);
the binary representation of lcrt is assigned to the symbol s.

m = ceil(log2(L + 1))    (10)
Thus, the new stego cell c, of length (n + m), is composed of
two parts, a data part and a symbol part (i.e., the cell is
reformed as c = a's = a1a2...ay...an s1s2...sm). In the
extraction phase, the receiver first takes the symbol part out
of the stego cell c and calculates the layer lcrt:

lcrt = dec(s1s2...sm)    (11)
Fig -2: The chart of the extension mode.
Eventually, the extended secret data w = w1w2...wk...wk+lcrt is
retrieved by putting the data part of the stego cell c into the
updated hash function of Formula (7):

w = f(a')    (12)
The detailed procedure of the extended matrix encoding is
shown in Fig. 3.
To make this clearer, we take the coding mode (1,7,3,2) as an
example of how the extended algorithm works. Assume that the
carrier data is a = 1101010 and the secret data taken from the
whole secret sequence is w = 11001.
First of all, the sender tests whether mod(dec(11001), 2^2) = 0.
Since mod(25, 2^2) = 1, the sender continues with the shorter
secret data w = 1100. Since mod(dec(1100), 2^1) = mod(12, 2) = 0,
it is confirmed that the secret data w = 1100 can be embedded
into the carrier data, and the current layer is lcrt = 1.
Secondly, calculate the symbol length m = ceil(log2 3) = 2 and
assign the symbol s = 01.
Thirdly, calculate the hash function with the carrier data
a = 1101010 as follows:
Fig -3: The flowchart of the extended matrix encoding.
f(a) = 0010 XOR 0100 XOR 1000 XOR 1100 = 0010,
the terms being ai . (i . 2^1) for the set positions i = 1, 2, 4, 6    (13)
Finally, calculate the index
y = (w XOR f(a)) / 2 = ((1100)2 XOR (0010)2) / 2 = (1110)2 / 2 = 7
and flip the seventh bit position of the carrier data
a = 1101010 to generate the stego data a' = 1101011. Up to now,
the stego cell c = 110101101 is obtained.
In the detection process, the receiver first takes the symbol
s = 01 from the stego cell and calculates the current layer
lcrt = dec(01) = 1. Subsequently, the hash function is
calculated with the stego data a' = 1101011 to retrieve the
secret data w = 1100:

f(a') = 0010 XOR 0100 XOR 1000 XOR 1100 XOR 1110 = 1100,
the terms being ai . (i . 2^1) for i = 1, 2, 4, 6, 7    (14)
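The whole (1, n, k, L) procedure, including the layer search, the scaled hash of Formula (7), and the appended symbol, can be sketched in Python; this is an illustrative reconstruction from Formulas (7)-(12), not the authors' code, and it reproduces the (1,7,3,2) example above:

```python
import math

def hash_f(a, layer):
    """Formula (7): XOR of ai * (i * 2**layer) over the set bits of the data part."""
    r = 0
    for i, bit in enumerate(a, start=1):
        if bit:
            r ^= i << layer
    return r

def embed_extended(a, secret, k, L):
    """Embed a prefix of `secret` (a 0/1 list) into cell `a` for mode (1, n, k, L).
    Returns the stego cell (data part + symbol part) and the number of bits consumed."""
    # Layer search: largest l <= L whose (k+l)-bit prefix is a multiple of 2**l
    for l in range(L, -1, -1):
        w = int("".join(map(str, secret[:k + l])), 2)
        if w % (2 ** l) == 0:          # always true for l = 0 (plain matrix encoding)
            break
    y = (w ^ hash_f(a, l)) >> l        # Formula (8)
    stego = list(a)
    if y != 0:
        stego[y - 1] ^= 1              # Formula (9)
    m = math.ceil(math.log2(L + 1))    # Formula (10)
    symbol = [int(b) for b in format(l, f"0{m}b")]
    return stego + symbol, k + l

def extract_extended(c, n, k, L):
    """Formulas (11)-(12): read the layer from the symbol part, then re-hash the data part."""
    m = math.ceil(math.log2(L + 1))
    l = int("".join(map(str, c[n:n + m])), 2)   # Formula (11)
    w = hash_f(c[:n], l)                        # Formula (12)
    return [int(b) for b in format(w, f"0{k + l}b")]
```

Running the example, embed_extended([1,1,0,1,0,1,0], [1,1,0,0,1], 3, 2) yields the stego cell 110101101 with 4 secret bits consumed, and extract_extended recovers 1100, exactly as in Formulas (13) and (14).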
5. ARCHITECTURE
5.1 Simulation and Analysis
In order to check the performance of the proposed algorithm,
we use a meaningful binary logo image of 64 by 64 pixels as
the secret message and hide it in the carrier image Lena of
256 by 256 pixels, shown in Fig. 4(a and b). The corresponding
stego image is shown in Fig. 4(c).
The PSNR between the original carrier image and the stego image
is 70.33, which indicates that the extended method has strong
covertness. Fig. 5 illustrates the difference between the
embedding efficiency of standard matrix encoding and that of
the extended algorithm with three different maximal extension
layers (respectively L = 7, 15, 31).
From the figure it can be seen that the embedding efficiency of
the extended algorithm is greater than that of matrix encoding
when the ratio of k to n is large. However, when the ratio falls
below 3/7, the efficiency of the extended algorithm with L = 7
starts to be lower than that of the standard algorithm; when the
ratio falls below 4/15 and 5/31, the same happens for L = 15 and
L = 31, respectively. Although the extended algorithm increases
the number of embedded secret bits, the modifications made to
set the symbol positions reduce the embedding efficiency,
because these changes do not load any secret message. On
average, setting the m-bit symbol costs m/2 bit changes per
embedding cell.
As the ratio of k to n decreases, the carrier length n becomes
large, which significantly reduces the number of embedding
cells generated by partitioning the LSB plane of the quantized
DCT coefficients. In other words, the proportion of altered
bits in the carrier data becomes very small. In this case, the
influence of the symbol modifications emerges, and the cost of
changes that load no effective information cannot be neglected.
Conversely, when the ratio of k to n is large, more secret data
can be embedded and a large number of changes exist inside the
cover data, so the influence of the symbol modifications is
relatively small. Although we can improve the embedding
efficiency by increasing the maximal extension layer L,
increasing L also increases the length of the symbol, which has
a negative impact on the efficiency. Thus, we should find the
optimal trade-off for the coding mode (1, n, k, L) according to
the size and type of the secret message. Taking the coding mode
(1,3,2,L) as an example, a new logo image, illustrated in
Fig. 6(a), is embedded into the test image Lena using the
extended scheme. Fig. 6(b) shows that the embedding efficiency
varies with the maximum extension layer L when n and k are
fixed. In this case, the maximum embedding efficiency is
obtained at L = 15: when L is less than 15, the secret bits are
not extended adequately, while a larger L increases the length
of the symbol, which hurts the embedding efficiency. Obviously,
the coding mode (1,3,2,15) is the best choice in this instance.
Fig -4: The performance of the extended algorithm
In matrix encoding, the length k of the secret message is
always smaller than the length n of the carrier cell, which
means the embedding rate is always below 100%. With the
extended algorithm, however, the length of the secret message
(k + lcrt) can be larger than the length of the carrier cell
(n + m) due to the extension. Theoretically, the embedding rate
can exceed 100%, and its maximum value can approach Rmax:

Rmax = Ls / (n + ceil(log2(Ls - k + 1))),    (15)
Fig -5: The comparison of the embedding efficiency.
Fig -6: The relationship between L and E when n and k are specific.
Fig -7: The comparison of the embedding rate.
where Ls denotes the length of the whole secret message. Fig. 7
shows the comparison between the embedding rate of standard
matrix encoding and that of the extended algorithm with three
different maximal extension layers (respectively L = 7, 15, 31).
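Formula (15) can be evaluated directly. As a small sketch (the value Ls = 64 x 64 bits below is only an illustrative choice matching the size of the logo used in the experiments), the bound easily exceeds 100% for long, fully extendable messages:

```python
import math

def r_max(Ls, n, k):
    """Formula (15): upper bound on the embedding rate when the whole Ls-bit
    secret extends into a single n-bit data part plus its symbol part."""
    return Ls / (n + math.ceil(math.log2(Ls - k + 1)))

print(r_max(64 * 64, 3, 2))   # far above 1, i.e. above a 100% embedding rate
```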
CONCLUSION
Many steganographic algorithms provide a high capacity for
hidden messages but neglect the embedding efficiency. In fact,
fewer modifications guarantee higher covertness of the
stegosystem. F5 steganography employs matrix encoding to get
higher efficiency than classic LSB replacement or matching
methods; however, as the efficiency increases, the embedding
rate decreases rapidly. In the high-efficiency case, only a
small amount of secret message can be embedded, which leads to
a waste of carrier data. The extended matrix encoding algorithm
proposed in this paper uses an L-layer extension to enhance the
embedding efficiency and rate simultaneously. When the ratio of
k to n is high, the extended algorithm obtains better
efficiency. Moreover, the extended algorithm theoretically
allows the embedding rate to exceed 100%, realizing a high
capacity while maintaining high efficiency.
From some experimental results, e.g. in Fig. 5, it can be seen
that the embedding efficiency of the improved scheme is not
always higher than that of F5. When the ratio of k to n becomes
small, the embedding efficiency decreases due to the
introduction of the symbol bits. The main reasons for this
phenomenon are that the changes made to set the symbols do not
load any secret message, and that a random secret message
cannot always be extended (in fact, each extension test
succeeds with probability 0.5). For the new method, this
problem seems inevitable. In spite of this, our scheme is still
important and meaningful, because binary images full of
consecutive black pixels '00000000...', like the logos in
Figs. 4 and 6, have many more opportunities to be extended
across a large number of extension layers. The experimental
results demonstrate that the extended algorithm has high
covertness, a high embedding efficiency, and a high embedding
rate in most cases, and the scheme is effective enough to be
applied to the area of covert communication.
REFERENCES:
[1] Simmons GJ. The prisoners' problem and the subliminal channel. In: Advances in cryptology: proceedings of Crypto 83, New York; 1984. p. 51-67.
[2] van Schyndel RG, Tirkel A, Osborne CF. A digital watermark. In: Proc. of int. conf. on image processing, Austin; November 1994. p. 86-9.
[3] Franz E, Jerichow A, Moller S, Pfitzmann A, Stierand I. Computer based steganography: how it works and why therefore any restrictions on cryptography are nonsense, at best. In: Proc. of the 1st international workshop on information hiding, Cambridge; May 1996. p. 7-21.
[4] Zhang Tao, Ping Xijian. A fast and effective steganalytic technique against JSteg-like algorithms. In: Proc. 8th ACM symp. on applied computing, Florida; March 2003. p. 307-11.
[5] Westfeld A, Pfitzmann A. Attacks on steganographic systems. Lect Notes Comput Sci 2000;1768:61-75.
[6] Provos N. Defending against statistical steganalysis. In: Proc. of the 10th USENIX security symposium, Washington, DC; August 2001. p. 323-35.
[7] Provos N, Honeyman P. Hide and seek: an introduction to steganography. IEEE Secur Privacy 2003;1:32-44.
[8] Fridrich J, Goljan M, Hogea D. Attacking the OutGuess. In: Proc. of the 3rd information hiding workshop on multimedia and security, Juan-les-Pins; December 2002. p. 3-6.
[9] Westfeld A. F5-a steganographic algorithm: high capacity despite better steganalysis. Lect Notes Comput Sci 2001;2137:289-302.
[10] Sallee P. Model-based steganography. In: Proc. of the 2nd international workshop on digital watermarking, Seoul; October 2003. p. 154-67.
[11] Sallee P. Model-based methods for steganography and steganalysis. Int J Image Graphics 2005;5(1):167-90.
[12] Fridrich J, Goljan M, Soukal D. Perturbed quantization steganography with wet paper codes. In: Proc. of ACM multimedia workshop, Magdeburg; September 2004. p. 4-15.
[13] Solanki K, Sarkar A, Manjunath B. YASS: yet another steganographic scheme that resists blind steganalysis. Lect Notes Comput Sci 2008;4567:16-31.
[14] Wang Xiangyang, Yang Yiping, Yang Hongying. Invariant image watermarking using multi-scale Harris detector and wavelet moments. Comput Electr Eng 2010;36(1):31-44.
[15] Lu Wei, Lu Hongtao, Chung Fulai. Feature based robust watermarking using image normalization. Comput Electr Eng 2010;36(1):2-18.
[16] Lu Wei, Sun Wei, Lu Hongtao. Robust watermarking based on DWT and nonnegative matrix factorization. Comput Electr Eng 2009;35(1):183-8.
[17] Al-Otum HA, Al-Taba'a AO. Adaptive color image watermarking based on a modified improved pixel-wise masking technique. Comput Electr Eng 2009;35(5):673-95.
[18] Fan Li, Gao Tiegang, Yang Qunting. A novel watermarking scheme for copyright protection based on adaptive joint image feature and visual secret sharing. Inf Control 2011;7(7):3679-94.
[19] Crandall R. Some notes on steganography. Posted on steganography mailing list; 1998. http://os.inf.tu-dresden.de/~westfeld/crandall.pdf.
[20] Wallace GK. The JPEG still picture compression standard. IEEE Trans Consum Electron 1992;38(1):xviii-xxiv.
BIOGRAPHIES:
Borkar Bharat Sampatrao received his B.E. and M.E. degrees in
Computer Science and Engineering from Pune University. He is a
member of ISTE, ACM, and IE. He has published 3 national and 2
international papers, and has presented 3 papers at national
conferences and 2 papers at international conferences. His
research interests include image processing.

Pritesh Kashinath Patil received his B.E. in Information
Technology from North Maharashtra University in 2012. He is
currently pursuing his M.E. in Information Technology from the
University of Pune.