The document presents a new method for securing medical images for mobile health applications. It combines encryption, robust watermarking, and fragile watermarking. For encryption, the image is divided into blocks that are encrypted by XORing with a key block. The key block is updated chaotically. Robust watermarking hides patient data in the image by modifying DCT coefficients using a mixing function with phase-shift keying. Fragile watermarking authenticates the image. The combination provides full protection of confidentiality, authentication, and integrity for medical images in mobile health. Experimental results show the method achieves high security with good performance.
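As a rough illustration of the encryption step described above, the following sketch XORs each block with a key block that is updated by a logistic map. The map, block size, and update rule are illustrative assumptions, not the paper's exact construction; note that because XOR is symmetric and the key stream is deterministic, running the same function again decrypts.

```python
import numpy as np

def encrypt_blocks(image, key_block, r=3.99, x0=0.7):
    """Encrypt an image block-by-block by XORing each block with a key
    block that is updated chaotically (here: a logistic map).
    Illustrative sketch only, not the paper's exact scheme."""
    h, w = image.shape
    bh, bw = key_block.shape
    out = image.copy()
    key = key_block.copy()
    x = x0
    for i in range(0, h, bh):
        for j in range(0, w, bw):
            out[i:i+bh, j:j+bw] ^= key
            # chaotic update: the logistic map drives a new key offset
            x = r * x * (1.0 - x)
            key = ((key.astype(np.int32) + int(x * 256)) % 256).astype(np.uint8)
    return out
```

Applying the function twice with the same key block restores the original image.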
Substitution-diffusion based Image Cipher (IJNSA Journal)
In this paper, a new image encryption scheme using a 128-bit secret key is proposed. The image is partitioned into several key-based dynamic blocks, and each block then passes through eight rounds of diffusion and substitution. In the diffusion process, the pixels of a block are rearranged within the block by a zigzag approach, whereas in the substitution process block pixels are replaced with others using a difference calculation over rows and columns. Due to the high order of substitution and diffusion, common attacks such as linear and differential cryptanalysis are infeasible. The experimental results show that the proposed technique is efficient and has strong security features.
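The zigzag rearrangement used in the diffusion step can be sketched as follows. A plain JPEG-style zigzag scan over a square block is assumed here; in the paper the blocks and scan are key-dependent.

```python
import numpy as np

def zigzag_indices(n):
    """Return the (row, col) visiting order of a classic zigzag scan
    over an n x n block (JPEG-style coefficient ordering)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def zigzag_permute(block):
    """Rearrange block pixels along the zigzag path; a sketch of the
    diffusion step for one block."""
    n = block.shape[0]
    flat = np.array([block[i, j] for i, j in zigzag_indices(n)])
    return flat.reshape(n, n)
```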
An Efficient Multiplierless Transform algorithm for Video Coding (CSCJournals)
This paper presents an efficient algorithm to accelerate software video encoders and decoders by reducing the number of arithmetic operations in the Discrete Cosine Transform (DCT). A multiplierless Ramanujan Ordered Number DCT (RDCT) is presented which computes the coefficients using shift and addition operations only. The reduction in computational complexity improves the performance of the video codec by almost 58% compared with the commonly used integer DCT. The results show that significant computation reduction can be achieved with negligible average peak signal-to-noise ratio (PSNR) degradation. The average structural similarity index metric (SSIM) also confirms that the degradation due to the approximation is minimal.
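The core idea of a multiplierless transform, replacing each constant multiplication with shifts and additions, can be illustrated with one hypothetical coefficient. The constant 181 ≈ 256/√2 is an illustrative choice only, not a stated RDCT coefficient.

```python
def mul_by_181(x):
    """Multiply an integer by 181 using only shifts and adds:
    181 = 128 + 32 + 16 + 4 + 1, so no hardware multiplier is needed."""
    return (x << 7) + (x << 5) + (x << 4) + (x << 2) + x
```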
Data Hiding Method With High Embedding Capacity Character (CSCJournals)
Recently, a data hiding method with high embedding capacity based on an improved EMD method was proposed by Kuo et al. [6]. They claimed that their scheme can not only hide a great deal of secret data but also maintain high safety and good image quality. However, in their scheme the sender and the receiver must share a synchronized random secret seed before they exchange the stego-image; otherwise, the receiver cannot recover the correct secret information from it. In this paper, we propose an improved scheme based on EMD and the LSB matching method that overcomes this problem: the sender does not need to share a synchronized random secret seed with the receiver before the stego-image is transmitted. The experimental results show that our proposed scheme achieves high embedding capacity and acceptable stego-image quality.
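Generic LSB matching, one of the two ingredients of the improved scheme, can be sketched as follows. This is the standard ±1 embedding, not the authors' full EMD-based construction.

```python
import random

def lsb_match_embed(pixels, bits, rng=random.Random(42)):
    """LSB matching: if a pixel's LSB already equals the secret bit,
    leave it; otherwise randomly add or subtract 1 (clamped at 0/255).
    A sketch of the generic technique."""
    out = list(pixels)
    for i, b in enumerate(bits):
        if out[i] & 1 != b:
            if out[i] == 0:
                out[i] += 1
            elif out[i] == 255:
                out[i] -= 1
            else:
                out[i] += rng.choice((-1, 1))
    return out

def lsb_extract(pixels, n):
    """The receiver only reads LSBs, so no shared seed is required."""
    return [p & 1 for p in pixels[:n]]
```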
IMPROVED PARALLEL THINNING ALGORITHM TO OBTAIN UNIT-WIDTH SKELETON (ijma)
To extract reliable features from a fingerprint image, many people use a thinning algorithm, which plays a very important role in preprocessing. In this paper, we propose a robust parallel thinning algorithm that preserves the connectivity of the binarized fingerprint image while producing the thinnest possible skeleton, only 1 pixel wide, lying extremely close to the medial axis. The proposed thinning method repeats three sub-iterations. The first sub-iteration removes only the outermost boundary pixels, using the inner points. To extract one-sided skeletons, the second sub-iteration seeks skeletons with a 2-pixel width. The third sub-iteration prunes the needless 2-pixel-wide pixels remaining in the obtained skeletons. The proposed thinning algorithm is robust against rotation and noise and produces a balanced medial axis. To evaluate its performance, we compare it with previous algorithms and analyze the results.
FUZZY IMAGE SEGMENTATION USING VALIDITY INDEXES CORRELATION (ijcsit)
This paper introduces an algorithm for image segmentation using a clustering technique based on the fuzzy c-means (FCM) algorithm, which is executed iteratively with different numbers of clusters. Simultaneously, five validity indexes are calculated and their information is correlated to determine the optimal number of clusters with which to segment an image. Results and simulations are shown in the paper.
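One validity index that could serve among the five is Bezdek's partition coefficient; the paper does not name its indexes, so this pairing is an assumption. The sketch below runs a minimal FCM on 1-D data and scores the resulting fuzzy partition.

```python
import numpy as np

def fcm(data, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on 1-D data; returns the cluster centers
    and the fuzzy membership matrix U (c x n)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                       # columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ data) / um.sum(axis=1)
        d = np.abs(centers[:, None] - data[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

def partition_coefficient(u):
    """Bezdek's partition coefficient, a common validity index:
    values closer to 1 indicate a crisper (better) clustering."""
    return (u ** 2).sum() / u.shape[1]
```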
An Improved Adaptive Steganographic Method Based on Least Significant Bit Sub... (IOSRJVSP)
This paper presents a novel technique for improved data embedding in cover images based on least significant bit substitution and pixel-value differencing. The proposed method exploits a property of the human visual system (HVS): the eyes tolerate larger changes in edge areas than in smooth areas. The method therefore hides a large amount of secret data in edge areas and a smaller amount in smooth areas. The results of the proposed method are verified using extensive simulations.
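The adaptive capacity decision, embedding more bits where the local pixel-value difference indicates an edge, might look like this; the threshold and bit counts are illustrative assumptions, not the paper's parameters.

```python
def bits_for_block(p1, p2, threshold=15):
    """Decide embedding capacity from the pixel-value difference:
    edge areas (large difference) take more bits than smooth areas.
    Threshold and bit counts are illustrative assumptions."""
    d = abs(p1 - p2)
    return 4 if d > threshold else 2
```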
Secure High Capacity Data Hiding in Images using EDBTC (ijsrd.com)
Block truncation coding (BTC) is an efficient compression technique with low computational complexity, but it suffers from two major issues: blocking and false contour effects. Error-diffused BTC (EDBTC) mitigates these deficiencies using visual low-pass compensation on the bitmap. In this paper, complementary hiding EDBTC is developed to resolve these issues. First a single watermark is embedded, and then multiple watermarks. An adaptive external bias factor is usually used to embed the watermark, but it damages image quality and robustness; here an extremely small bias factor controls the watermark embedding, which enables a high-capacity scenario without significantly damaging image quality. The few data hiding schemes proposed so far damage the characteristics of BTC. The security of the embedded watermark is high enough that it cannot easily be extracted by malicious users: the watermark is encrypted with a standard encryption algorithm before it is embedded.
Performance Analysis of CRT for Image Encryption (ijcisjournal)
With the fast advancement of information technology, the security of image data transmitted or stored over the internet has become very difficult to guarantee. An effective method to hide the details is encryption, so that only authorized persons holding the keys can decrypt the image. Owing to the inherent features of digital images, such as high data capacity, large redundancy, and strong similarity among pixels, conventional encryption algorithms such as AES, DES, 3DES, and Blowfish are not applicable for real-time image encryption. This paper presents the performance of the Chinese Remainder Theorem (CRT) for image encryption to secure the storage and transmission of images over the internet.
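The CRT machinery behind such a scheme can be sketched as follows; the moduli are an illustrative choice (pairwise coprime, with product above 255 so every pixel value is recoverable), and a real cipher would additionally key the process.

```python
from math import prod

def crt_split(pixel, moduli=(7, 11, 13)):
    """Represent a pixel value by its residues modulo pairwise-coprime
    moduli (illustrative encoding; 7 * 11 * 13 = 1001 > 255)."""
    return [pixel % m for m in moduli]

def crt_combine(residues, moduli=(7, 11, 13)):
    """Reconstruct the pixel with the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M
```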
Copy Move Forgery Detection Using GLCM Based Statistical Features (ijcisjournal)
Gray Level Co-occurrence Matrix (GLCM) features are mostly explored in face recognition and CBIR; here the GLCM technique is explored for copy-move forgery detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity, and energy are derived; these statistics form the feature vector. A Support Vector Machine (SVM) is trained on these features, and the authenticity of each image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database; in total, 1200 forged and processed images are tested. The performance of the present work is compared against recent methods.
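Computing a GLCM and two of the named statistics, contrast and energy, can be sketched as follows, assuming a single horizontal neighbour offset for simplicity.

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for the horizontal offset (0, 1),
    normalised to joint probabilities."""
    g = np.zeros((levels, levels))
    for i in range(img.shape[0]):
        for j in range(img.shape[1] - 1):
            g[img[i, j], img[i, j + 1]] += 1
    return g / g.sum()

def glcm_stats(p):
    """Contrast and energy, two of the statistics used as features."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy
```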
Perimetric Complexity of Binary Digital Images (RSARANYADEVI)
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of the inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
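With the 4π normalization, a disk scores exactly 1 (the minimum), which the following sketch checks.

```python
import math

def perimetric_complexity(perimeter, area):
    """Perimetric complexity: (inside + outside perimeter)^2 divided by
    the foreground area, divided by 4*pi. A disk scores exactly 1:
    (2*pi*r)^2 / (pi*r^2 * 4*pi) = 1."""
    return perimeter ** 2 / (area * 4 * math.pi)
```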
An enhanced difference pair mapping steganography method to improve embedding... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Fourth order improved finite difference approach to pure bending analysis o... (eSAT Publishing House)
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP (csandit)
Most watermarking methods use pixel values or coefficients as the judgment condition to embed or extract a watermark image. Variation of these values can make the condition inaccurate, so that an incorrect judgment is made. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels. Observing common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or halftone transformation. This greatly helps reduce the modification strength required against image processing operations. Experimental results show that the proposed method can resist various attacks while keeping the image quality high.
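The scale-relationship judgment can be sketched as follows; the margin-based gap widening is an illustrative stabilizer, not the paper's exact embedding rule.

```python
def embed_bit(p1, p2, bit, margin=4):
    """Force the scale (order) relationship of two pixels to encode one
    bit: bit 1 -> p1 > p2, bit 0 -> p1 <= p2. The margin keeps the
    relation stable under mild value variation. Illustrative sketch."""
    lo, hi = min(p1, p2), max(p1, p2)
    if hi - lo < margin:                 # widen the gap for stability
        mid = (lo + hi) // 2
        lo = max(0, mid - margin // 2)
        hi = min(255, mid + margin // 2 + 1)
    return (hi, lo) if bit else (lo, hi)

def extract_bit(p1, p2):
    """Only the order matters, not the exact values."""
    return 1 if p1 > p2 else 0
```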
Background Estimation Using Principal Component Analysis Based on Limited Mem... (IJECEIAES)
Given a video of M frames of size h × w, the background components are the matrix elements that remain relatively constant over the M frames; in the PCA (principal component analysis) method these elements are referred to as "principal components". In video processing, background subtraction means removal of the background component from the video, and the PCA method is used to obtain that component. The method transforms the 3-dimensional video (h × w × M) into a 2-dimensional matrix (N × M), where N = h × w is the length of each vectorized frame. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. Limited-memory block Krylov subspace optimization is then proposed to improve the computational performance. The background estimate is obtained by projecting each input image (the first frame of each sequence) onto the space spanned by the principal components. The procedure was run on the standard SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions from 146 × 150 to 352 × 240 and 258 to 500 frames each. Performance is reported with 8 metrics; on average over the 8 videos: percentage of error pixels 0.24%, percentage of clustered error pixels 0.21%, multiscale structural similarity index 0.88 (of a maximum 1), and running time 61.68 seconds.
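A minimal sketch of the projection step, using a plain SVD in place of the limited-memory block Krylov acceleration described in the paper:

```python
import numpy as np

def pca_background(frames):
    """Estimate the background as the projection of the first frame onto
    the dominant eigenvector (first left singular vector) of the N x M
    frame matrix. Plain SVD stands in for the paper's accelerated
    Krylov computation."""
    h, w = frames[0].shape
    X = np.stack([f.ravel() for f in frames], axis=1)   # N x M matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u1 = U[:, :1]                                       # dominant component
    bg = u1 @ (u1.T @ X[:, :1])                         # project first frame
    return bg.reshape(h, w)
```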
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM... (IJDKP)
Urban surveillance systems generate huge amounts of video and image data and impose high pressure on the recording disks, so video research is clearly a key area of big data research. Since videos are composed of images, the degree and efficiency of image compression are of great importance. Although the DCT-based JPEG standard is widely used, it has well-known shortcomings; for instance, encoding deficiencies such as block artifacts frequently have to be removed. In this paper, we propose a new, simple but effective method to quickly reduce the visual block artifacts of DCT-compressed images for urban surveillance systems. The simulation results demonstrate that our proposed method achieves better quality than widely used filters while consuming far less CPU time.
Blind Image Separation Using Forward Difference Method (FDM) (sipij)
In this paper, blind image separation is performed by exploiting the property of sparseness to represent images. A new sparse representation called the forward difference method is proposed. It is known that most independent component analysis (ICA) basis functions extracted from images are sparse yet give an unreliable sparseness measure. In the proposed method, the image mixtures are first transformed into sparse images. These images are divided into blocks, and for each block the ℓ0-norm sparseness measure is applied. The block with the greatest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
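The block-wise ℓ0 sparseness selection can be sketched as follows; the block size and zero tolerance are illustrative assumptions.

```python
import numpy as np

def sparsest_block(img, bs=4, tol=1e-6):
    """Forward-difference the image, split the result into bs x bs
    blocks, and return the index of the block with the fewest non-zero
    entries (an l0-style sparseness measure)."""
    diff = np.diff(img.astype(float), axis=1)     # forward difference
    h, w = diff.shape
    best, best_nnz = None, None
    for bi, i in enumerate(range(0, h - bs + 1, bs)):
        for bj, j in enumerate(range(0, w - bs + 1, bs)):
            nnz = np.count_nonzero(np.abs(diff[i:i+bs, j:j+bs]) > tol)
            if best_nnz is None or nnz < best_nnz:
                best, best_nnz = (bi, bj), nnz
    return best, best_nnz
```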
A STUDY OF METHODS FOR TRAINING WITH DIFFERENT DATASETS IN IMAGE CLASSIFICATION (ADEIJ Journal)
This research developed a method for training a Convolutional Neural Network model on multiple datasets to achieve good performance on both. Two methods of training with two characteristically different datasets with identical categories, one with very clean images and one with real-world data, were proposed and studied. The model used for the study was a neural network derived from ResNet. Mixed training produced the best accuracy on each dataset when that dataset was mixed into the training set at the highest proportion, and the best combined performance when the real-world dataset was mixed in at a ratio of around 70%. This ratio produced a top-1 combined performance of 63.8% (versus 30.8% with no mixing) and a top-3 combined performance of 83.0% (versus 55.3% with no mixing). The research also showed that iterative training yields worse combined performance than mixed training because of fast forgetting.
Wavelet Transform based Medical Image Fusion With different fusion methods (IJERA Editor)
This paper proposes a wavelet-transform-based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and the clinical treatment planning system. The wavelet-based fusion algorithms are applied to CT and MRI medical images using the MIN, MAX, and MEAN fusion rules, and the results are reported. With more multimodality medical images available in clinical applications, combining images from different modalities has become very important, and medical image fusion has emerged as a promising new research field.
High Speed Data Exchange Algorithm in Telemedicine with Wavelet based on 4D M... (Dr. Amarjeet Singh)
Existing medical imaging techniques such as fMRI, positron emission tomography (PET), dynamic 3D ultrasound, and dynamic computerized tomography yield large four-dimensional data sets. 4D medical data sets are series of volumetric images sampled over time; they are large and demand great resources for storage and transmission. In this paper, we present a method in which a 3D image is taken and the Discrete Wavelet Transform (DWT) and Dual-Tree Complex Wavelet Transform (DTCWT) techniques are applied separately to split the image into sub-bands. The encoding and decoding are done using 3D-SPIHT at different bits per pixel (bpp). The reconstructed image is synthesized using the inverse DWT technique. The quality of the compressed image is evaluated using factors such as the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
3-D WAVELET CODEC (COMPRESSION/DECOMPRESSION) FOR 3-D MEDICAL IMAGES (ijitcs)
Compression is an important part of image processing, saving memory space and reducing bandwidth during transmission. The main purpose of this paper is to analyse the performance of 3-D wavelet encoders on 3-D medical images. Four wavelet transforms, namely Daubechies 4, Daubechies 6, Cohen-Daubechies-Feauveau 9/7, and Cohen-Daubechies-Feauveau 5/3, are used in the first stage, with encoders such as 3-D SPIHT, 3-D SPECK, and 3-D BISK used in the second stage for the compression. Experiments are performed on medical test images such as magnetic resonance images (MRI) and X-ray angiograms (XA). The XA and MR image slices are grouped into 4, 8, and 16 slices, and the wavelet transforms and encoding schemes are applied to identify the best wavelet-encoder combination. The performance of the proposed scheme is evaluated in terms of peak signal-to-noise ratio and bit rate.
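The PSNR metric used in these evaluations is defined as 10·log10(peak²/MSE), as in the following sketch.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and
    its reconstruction; infinite when the two are identical."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```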
Secure High Capacity Data Hiding in Images using EDBTCijsrd.com
Block truncation coding is an efficient compression technique which has low computational complexity but it has two major issues like blocking and false counter effects .So here we have used error-diffused BTC that improves above deficiencies using visual low pass compensation on the bitmap. In this paper complementary hiding EDBTC is developed to resolve the above issue. In this project a single water mark is embedded and then multiple water marks are embedded .usually we use an adaptive external bias factor to embed the watermark but it damages the image quality and robustness .So here we use an extremely small bias factor to control the watermark embedding and this enables a high capacity scenario without significantly damaging image quality. Until now a few data hiding schemes are proposed but it damages the characteristics of BTC. The security of embedded water mark is high that it can’t be easy extracted by the malicious users. The watermark is encrypted by standard encryption algorithm and then it is embedded.
Performance Analysis of CRT for Image Encryption ijcisjournal
With the fast advancements of information technology, the security of image data transmitted or stored over
internet is become very difficult. To hide the details, an effective method is encryption, so that only
authorized persons can decrypt the image with the keys available. Since the default features of digital
image such as high capacity data, large redundancy and large similarities among pixels, the conventional
encryption algorithms such as AES, , DES, 3DES, and Blow Fish, are not applicable for real time image
encryption. This paper presents the performance of CRT for image encryption to secure storage and
transmission of image over internet.
Copy Move Forgery Detection Using GLCM Based Statistical Features ijcisjournal
The features Gray Level Co-occurrence Matrix (GLCM) are mostly explored in Face Recognition and
CBIR. GLCM technique is explored here for Copy-Move Forgery Detection. GLCMs are extracted from all
the images in the database and statistics such as contrast, correlation, homogeneity and energy are
derived. These statistics form the feature vector. Support Vector Machine (SVM) is trained on all these
features and the authenticity of the image is decided by SVM classifier. The proposed work is evaluated on
CoMoFoD database, on a whole 1200 forged and processed images are tested. The performance analysis
of the present work is evaluated with the recent methods.
Perimetric Complexity of Binary Digital ImagesRSARANYADEVI
Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by . Difficulties arise when this definition is applied to digital images composed of binary pixels. In this article we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
An enhanced difference pair mapping steganography method to improve embedding...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Fourth order improved finite difference approach to pure bending analysis o...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIPcsandit
Most watermark methods use pixel values or coefficients as the judgment condition to embed or
extract a watermark image. The variation of these values may lead to the inaccurate condition
such that an incorrect judgment has been laid out. To avoid this problem, we design a stable
judgment mechanism, in which the outcome will not be seriously influenced by the variation.
The principle of judgment depends on the scale relationship of two pixels. From the observation
of common signal processing operations, we can find that the pixel value of processed image
usually keeps stable unless an image has been manipulated by cropping attack or halftone
transformation. This can greatly help reduce the modification strength from image processing
operations. Experiment results show that the proposed method can resist various attacks and
keep the image quality friendly.
Background Estimation Using Principal Component Analysis Based on Limited Mem...IJECEIAES
Given a video of 푀 frames of size ℎ × 푤. Background components of a video are the elements matrix which relative constant over 푀 frames. In PCA (principal component analysis) method these elements are referred as “principal components”. In video processing, background subtraction means excision of background component from the video. PCA method is used to get the background component. This method transforms 3 dimensions video (ℎ × 푤 × 푀) into 2 dimensions one (푁 × 푀), where 푁 is a linear array of size ℎ × 푤 . The principal components are the dominant eigenvectors which are the basis of an eigenspace. The limited memory block Krylov subspace optimization then is proposed to improve performance the computation. Background estimation is obtained as the projection each input image (the first frame at each sequence image) onto space expanded principal component. The procedure was run for the standard dataset namely SBI (Scene Background Initialization) dataset consisting of 8 videos with interval resolution [146 150, 352 240], total frame [258,500]. The performances are shown with 8 metrics, especially (in average for 8 videos) percentage of error pixels (0.24%), the percentage of clustered error pixels (0.21%), multiscale structural similarity index (0.88 form maximum 1), and running time (61.68 seconds).
BIG DATA-DRIVEN FAST REDUCING THE VISUAL BLOCK ARTIFACTS OF DCT COMPRESSED IM...IJDKP
The Urban Surveillance Systems generate huge amount of video and image data and impose high pressure
onto the recording disks. It is obvious that the research of video is a key point of big data research areas.
Since videos are composed of images, the degree and efficiency of image compression are of great
importance. Although the DCT based JPEG standard are widely used, it encounters insurmountable
problems. For instance, image encoding deficiencies such as block artifacts have to be removed frequently.
In this paper, we propose a new, simple but effective method to fast reduce the visual block artifacts of DCT
compressed images for urban surveillance systems. The simulation results demonstrate that our proposed
method achieves better quality than widely used filters while consuming much less computer CPU
resources.
Blind Image Seperation Using Forward Difference Method (FDM)sipij
In this paper, blind image separation is performed, exploiting the property of sparseness to represent images. A new sparse representation called forward difference method is proposed. It is known that most of the independent component analysis (ICA) basis functions, extracted from images are sparse and gives unreliable sparseness measure. In the proposed method, the image mixture is first transformed to sparse images. These images are divided into blocks and for each block the sparseness measure ε0 norm is applied. The block having the most sparseness is considered to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
A STUDY OF METHODS FOR TRAINING WITH DIFFERENT DATASETS IN IMAGE CLASSIFICATION (ADEIJ Journal)
This research developed a method for training a Convolutional Neural Network model with multiple datasets to achieve good performance on both. Two methods of training with two characteristically different datasets with identical categories, one with very clean images and one with real-world data, were proposed and studied. The model used for the study was a neural network derived from ResNet. Mixed training was shown to produce the best accuracy for each dataset when that dataset is mixed into the training set at the highest proportion, and the best combined performance when the real-world dataset was mixed in at a ratio of around 70%. This ratio produced a top-1 combined performance of 63.8% (no mixing produced 30.8%) and a top-3 combined performance of 83.0% (no mixing produced 55.3%). This research also showed that iterative training has a worse combined performance than mixed training due to the issue of fast forgetting.
Wavelet Transform based Medical Image Fusion With different fusion methods (IJERA)
This paper proposes a wavelet-transform-based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and to the clinical treatment planning system. The wavelet-based fusion algorithms are applied to CT and MRI medical images using the MIN, MAX, and MEAN fusion rules, and the results are presented. With more multimodality medical images available in clinical applications, the idea of combining images from different modalities becomes very important, and medical image fusion has emerged as a promising new research field.
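The MIN/MAX/MEAN fusion rules can be sketched with a hand-rolled one-level Haar transform standing in for the paper's discrete wavelet transform; the choice of Haar and the single decomposition level are assumptions made for brevity:

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar transform (even-sized input assumed).
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d.
    h, w = LL.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2, rule):
    # Combine the two images' wavelet coefficients with the chosen rule.
    rules = {"MIN": np.minimum, "MAX": np.maximum,
             "MEAN": lambda p, q: (p + q) / 2.0}
    fused = [rules[rule](c1, c2) for c1, c2 in zip(haar2d(img1), haar2d(img2))]
    return ihaar2d(*fused)
```

Because the transform is linear, MEAN fusion reduces to the pixel-wise average, while MIN/MAX act on individual sub-band coefficients.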
High Speed Data Exchange Algorithm in Telemedicine with Wavelet based on 4D M... (Dr. Amarjeet Singh)
Existing medical imaging techniques such as fMRI, positron emission tomography (PET), dynamic 3D ultrasound, and dynamic computerized tomography yield large four-dimensional data sets. 4D medical data sets are series of volumetric images sampled in time; they are large and demand a great deal of resources for storage and transmission. In this paper, we present a method wherein a 3D image is taken and Discrete Wavelet Transform (DWT) and Dual-Tree Complex Wavelet Transform (DTCWT) techniques are applied to it separately, splitting the image into sub-bands. Encoding and decoding are done using 3D-SPIHT at different bits per pixel (bpp). The reconstructed image is synthesized using the inverse DWT. The quality of the compressed image is evaluated using factors such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
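The MSE and PSNR quality factors mentioned above are straightforward to compute; a minimal sketch, assuming the conventional 8-bit peak value of 255:

```python
import numpy as np

def mse(orig, recon):
    # Mean squared error between original and reconstructed images.
    return float(np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2))

def psnr(orig, recon, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for a perfect reconstruction.
    m = mse(orig, recon)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)
```

For example, an all-zeros image reconstructed as all ones has MSE 1 and PSNR 10·log10(255²) ≈ 48.13 dB.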
3-D WAVELET CODEC (COMPRESSION/DECOMPRESSION) FOR 3-D MEDICAL IMAGES (ijitcs)
Compression is an important part in image processing in order to save memory space and reduce the
bandwidth while transmitting. The main purpose of this paper is to analyse the performance of 3-D wavelet
encoders using 3-D medical images. Four wavelet transforms, namely Daubechies 4, Daubechies
6, Cohen-Daubechies-Feauveau 9/7 and Cohen-Daubechies-Feauveau 5/3, are used in the first stage, with
encoders such as 3-D SPIHT, 3-D SPECK and 3-D BISK used in the second stage for the compression.
Experiments are performed using medical test images such as magnetic resonance images (MRI) and X-ray
angiograms (XA). The XA and MR image slices are grouped into 4, 8 and 16 slices and the wavelet
transforms and encoding schemes are applied to identify the best wavelet encoder combination. The
performances of the proposed scheme are evaluated in terms of peak signal to noise ratio and bit rate.
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Microarray Images (IJAAS)
Microarray technology enables the simultaneous monitoring of thousands of genes in parallel. Based on such measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper presents a simple linear iterative clustering (SLIC) based self-organizing maps (SOM) algorithm for segmentation of microarray images. Clusters of pixels which share similar features are called superpixels; they can be used as mid-level units to decrease the computational cost in many vision applications. The proposed algorithm uses superpixels as clustering objects instead of individual pixels. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing maps clustering methods.
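The idea of clustering superpixels instead of pixels can be illustrated with a tiny 1-D k-means over superpixel mean intensities. K-means stands in for the SOM here, and the superpixel mean values are made up for the sketch:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    # Plain 1-D k-means on a vector of feature values.
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Stand-ins for superpixels: the mean intensity of each region (hypothetical).
# Clustering 7 superpixels is far cheaper than clustering every pixel they cover.
superpixel_means = np.array([12.0, 15.0, 200.0, 11.0, 198.0, 205.0, 14.0])
labels, centers = kmeans_1d(superpixel_means, k=2)
```

The dark regions and the bright (spot) regions end up in separate clusters regardless of initialization, since the two groups are well separated.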
Quality Measurements of Lossy Image Steganography Based on H-AMBTC Technique ... (AM Publications, India)
Steganography is an information hiding technique in which a secret message is concealed in another medium, such as an image, video or audio file, called the cover file. The main idea of steganography is to provide security to private or public data. In this paper we combine the Hadamard transform and Absolute Moment Block Truncation Coding into a new scheme called H-AMBTC, which is used to compress the cover file and conceal the secret data in it. H-AMBTC is not only image compression: it goes beyond the AMBTC technique in that only half of the number of pixels in the binary converted image are transmitted. The H-AMBTC technique is compared for 2x2, 4x4, 8x8 and 16x16 block sizes. H-AMBTC is a lossy technique, so the cover image and the secret image cannot be recovered completely.
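Plain AMBTC, the baseline that H-AMBTC extends, can be sketched as follows. The block values are toy data and the Hadamard stage is omitted:

```python
import numpy as np

def ambtc_encode(block):
    # Threshold at the block mean: keep a bitmap plus two reconstruction levels.
    m = block.mean()
    bitmap = block >= m
    q = int(bitmap.sum())
    hi = block[bitmap].mean() if q > 0 else m
    lo = block[~bitmap].mean() if q < block.size else m
    return bitmap, lo, hi

def ambtc_decode(bitmap, lo, hi):
    # Each pixel is replaced by its group's reconstruction level.
    return np.where(bitmap, hi, lo)

block = np.array([[10.0, 10.0], [200.0, 200.0]])
rec = ambtc_decode(*ambtc_encode(block))   # two-level blocks survive exactly
```

Blocks with more than two distinct levels are only approximated, which is where the lossiness noted in the abstract comes from.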
Enhancement of genetic image watermarking robust against cropping attack (ijfcstjournal)
An enhancement of an image watermarking algorithm, made robust against a particular attack by using a genetic algorithm, is presented here. There is a trade-off between imperceptibility and robustness in image watermarking. To preserve both of these characteristics at reasonable levels in digital image watermarking, the genetic algorithm is used. Several factors were introduced to provide robustness of image watermarking against cropping attacks: the Centre of Interest Proximity Factor (CIPF), the Complexity Factor (CF) and the Priority Coefficient (PC).
A novel RRW framework to resist accidental attacks (eSAT Journals)
Abstract: Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright while preserving the intactness of host images and providing robustness against unintentional attacks. Past histogram-rotation-based methods suffer from extremely poor invisibility of watermarked images and limited robustness when extracting watermarks from watermarked images damaged by unintentional attacks. This paper proposes a wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC) method with enhanced pixel-wise masking (EPWM). The method embeds and extracts the watermark by histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity, and it achieves both reversibility and invisibility of the watermark. The experimental results show comprehensive performance in terms of reversibility, robustness, invisibility, capacity and run-time complexity, widely applicable to different kinds of images. Keywords: integer wavelet transform, k-means clustering, masking, robust reversible watermarking (RRW)
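Histogram shifting, the embedding primitive that histogram-based RRW methods build on, can be illustrated on a 1-D pixel array. This sketch works in the pixel domain rather than the paper's wavelet domain, and it assumes a known peak bin and an empty zero bin:

```python
import numpy as np

def hs_embed(pixels, bits, peak, zero):
    # Shift the histogram bins strictly between `peak` and `zero` up by one,
    # then encode each payload bit into a pixel that originally held the peak.
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1
    peak_idx = np.flatnonzero(pixels == peak)
    assert len(bits) <= len(peak_idx), "not enough peak pixels for the payload"
    for k, b in enumerate(bits):
        out[peak_idx[k]] = peak + b
    return out

def hs_extract(marked, n_bits, peak, zero):
    # Pixels at peak/peak+1 carry the bits; undoing the shift restores the host.
    carrier_idx = np.flatnonzero((marked == peak) | (marked == peak + 1))
    bits = [int(marked[i] == peak + 1) for i in carrier_idx[:n_bits]]
    restored = marked.copy()
    restored[(restored > peak) & (restored <= zero)] -= 1
    return bits, restored

host = np.array([5, 6, 5, 7, 8, 5, 6])
marked = hs_embed(host, [1, 0, 1], peak=5, zero=9)
bits, restored = hs_extract(marked, 3, peak=5, zero=9)
```

Extraction recovers both the payload bits and, bit-exactly, the original host, which is the reversibility property the abstract refers to.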
Analysis of the Iriscode Bioencoding Scheme (CSCJournals)
Cancelable biometrics is a technique used to enhance security and user privacy. These schemes are employed to generate multiple revocable data from the original biometric template. In this paper, the security of binary template transformations is evaluated through a new transformation for iris templates, called the bioencoding scheme. This transformation and its security are analyzed using Boolean functions and nonlinear Boolean systems. A general discussion on binary template transformations is finally proposed.
AN ENHANCED SEPARABLE REVERSIBLE DATA HIDING IN ENCRYPTED IMAGES USING SIDE MATCH (IJMTER)
This paper proposes a scheme for Enhanced Separable Reversible Data Hiding in
Encrypted images Using Side Match. In the first step the original image is encrypted using an
encryption key. Then additional data is embedded into the image by modifying a small portion of the
encrypted image using a data hiding key. With an encrypted image containing additional data, if a
receiver has the data hiding key, he can extract the additional data. If the receiver has the encryption
key, he can decrypt the image, but cannot extract the additional data. If the receiver has both the data
hiding key and encryption key, he can extract the additional data and recover the original content by
exploiting the spatial correlation in natural images. The accuracy of data extraction is improved by
using a better scheme for measuring the smoothness of the received image and by using the Side
Match scheme to further decrease the error rate of the extracted bits.
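The separability property can be illustrated with a toy byte-level scheme: a keyed XOR stream cipher for the encryption key, and LSB flipping at key-derived positions for the data-hiding key. The keystream construction and position selection here are illustrative stand-ins, not the paper's actual scheme (and not production cryptography):

```python
import hashlib, random

def keystream(key, n):
    # Deterministic byte stream derived from a key (illustrative only).
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_cipher(pixels, key):
    # XOR with the keystream; applying it twice decrypts.
    return bytes(p ^ k for p, k in zip(pixels, keystream(key, len(pixels))))

def embed_bits(cipher, bits, hiding_key):
    # The data hider flips LSBs at positions derived from the hiding key alone.
    pos = random.Random(hiding_key).sample(range(len(cipher)), len(bits))
    buf = bytearray(cipher)
    for p, b in zip(pos, bits):
        buf[p] = (buf[p] & 0xFE) | b
    return bytes(buf)

def extract_bits(marked, n_bits, hiding_key):
    # Extraction needs only the hiding key, not the encryption key.
    pos = random.Random(hiding_key).sample(range(len(marked)), n_bits)
    return [marked[p] & 1 for p in pos]

image = bytes(range(64))                 # stand-in for pixel bytes
cipher = xor_cipher(image, b"enc-key")
marked = embed_bits(cipher, [1, 0, 1, 1], hiding_key=42)
```

The hiding key alone recovers the payload, while the encryption key alone recovers the image up to the few modified LSBs, which is exactly the separability the abstract describes.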
We present a new image compression method to improve the visual perception of decompressed images and achieve a higher compression ratio. The method balances compression rate against image quality by prioritizing the essential parts of the image: the edges. The key subject (edges) is of more significance than the background (non-edge regions). Taking into consideration the value of image components and the effect of smoothness on image compression, the method classifies image components as edge or non-edge: low-quality lossy compression is applied to non-edge components, whereas high-quality lossy compression is applied to edge components. Results show that the suggested method is efficient in terms of compression ratio, bits per pixel and peak signal-to-noise ratio.
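The edge/non-edge split can be sketched as gradient-thresholded classification followed by two quantizer step sizes; the gradient operator, threshold, and step sizes are all hypothetical choices for the sketch:

```python
import numpy as np

def edge_mask(img, thresh=20.0):
    # Mark pixels whose forward-difference gradient magnitude exceeds a threshold.
    f = img.astype(float)
    gx = np.abs(np.diff(f, axis=1, append=f[:, -1:]))
    gy = np.abs(np.diff(f, axis=0, append=f[-1:, :]))
    return (gx + gy) > thresh

def edge_aware_quantize(img, fine=4, coarse=32, thresh=20.0):
    # Fine quantization on edge pixels, coarse on smooth background pixels.
    mask = edge_mask(img, thresh)
    step = np.where(mask, fine, coarse)
    return np.round(img / step) * step

img = np.full((8, 8), 100.0)
img[:, 4:] = 200.0                # vertical step edge between columns 3 and 4
out = edge_aware_quantize(img)
err = np.abs(out - img)
```

The edge column is preserved nearly exactly while smooth regions absorb the larger quantization error, mirroring the high/low quality allocation described above.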
Hybrid medical image compression method using quincunx wavelet and geometric ... (journalBEEI)
The purpose of this article is to find an efficient, optimal compression method that reduces file size while retaining the information needed for good-quality processing and credible pathological reports, based on extracting the characteristic information contained in medical images. We propose a novel medical image compression scheme that combines a geometric active contour model with the quincunx wavelet transform. The method first localizes the region of interest, using the level set to isolate the parts containing pathology for optimal reduction; the quincunx wavelet is then coupled with the set partitioning in hierarchical trees (SPIHT) algorithm. After testing several algorithms, we observed that the proposed method gives satisfactory results. The comparison of the experimental results is based on standard evaluation parameters.
Lossless 4D Medical Images Compression Using Adaptive Inter Slices Filtering (IJAAS)
Recent lossless 4D medical image compression works are based on applying techniques originating in video compression to efficiently eliminate redundancies along the different dimensions of the image. In this context we present a new approach to lossless 4D medical image compression which consists of applying a 2D wavelet transform in the spatial directions, optionally followed by either a lifting transform or motion compensation in the inter-slice direction; the resulting slices are coded with 3D SPIHT. Our approach was compared with 3D SPIHT with and without motion compensation. The results show that our approach offers better performance in lossless compression rate.
Wavelet based Image Coding Schemes: A Recent Survey (ijsc)
A variety of new and powerful algorithms have been developed for image compression over the years. Among them the wavelet-based image compression schemes have gained much popularity due to their overlapping nature which reduces the blocking artifacts that are common phenomena in JPEG compression and multiresolution character which leads to superior energy compaction with high quality reconstructed images. This paper provides a detailed survey on some of the popular wavelet coding techniques such as the Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
MINIMIZING DISTORTION IN STEGANOGRAPHY BASED ON IMAGE FEATURE (ijcsit)
WOW has two defects. First, image features are not considered when hiding information along the minimal-distortion path, which leads to high total distortion. Second, total distortion grows too rapidly as hiding capacity increases, which leads to poor anti-detection performance when the hidden capacity is large. To solve these two problems, a new algorithm named MDIS is proposed. MDIS is also based on the minimizing-additive-distortion framework of STC and uses the same distortion function as WOW. MDIS exploits the fact that a large number of pixels share their value with one of their eight neighbouring pixels, together with a secret-sharing mechanism, to reduce the total distortion, improve anti-detection and increase the PSNR. Experimental results show that MDIS has better invisibility, smaller distortion and stronger anti-detection than WOW.
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE (cscpconf)
Advances in technology have brought about extensive research in the field of image fusion.
Image fusion is one of the most researched challenges in face recognition (FR), the process by
which the brain and mind understand, interpret and identify or verify human faces. Image
fusion is the combination of two or more source images which vary in resolution,
instrument modality, or image capture technique into a single composite
representation. Thus, the source images are complementary in many ways, with no one input
image being an adequate data representation of the scene. Therefore, the goal of an image
fusion algorithm is to integrate the redundant and complementary information obtained from
the source images in order to form a new image which provides a better description of the scene
for human or machine perception. In this paper we have proposed a novel approach of pixel
level image fusion using PCA that will remove the image blurredness in two images and
reconstruct a new de-blurred fused image. The proposed approach is based on the calculation
of eigenfaces with Principal Component Analysis (PCA), which has been the most widely used
method for dimensionality reduction and feature extraction.
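A common pixel-level PCA fusion rule, consistent with the approach described above, weights each source image by the dominant eigenvector of the 2x2 covariance of the flattened inputs. This is the standard textbook variant, not necessarily the authors' exact formulation:

```python
import numpy as np

def pca_fuse(img1, img2):
    # Weights come from the dominant eigenvector of the 2x2 covariance matrix
    # of the two flattened source images, normalized to sum to one.
    data = np.stack([img1.ravel(), img2.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    w = v / v.sum()
    return w[0] * img1 + w[1] * img2

rng = np.random.default_rng(1)
a = rng.random((16, 16))
fused_same = pca_fuse(a, a)      # identical inputs fuse to themselves
```

Because the weights are non-negative and sum to one, the fused image is a pixel-wise convex combination of the two sources.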
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. First, a doubly fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second-order sliding mode controller (SOSMC). Their results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par... (IJECEIAES)
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of its parameter-tuning methods. The paper deals with optimization of PID regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Training algorithms based on minimizing the mismatch between the regulated value and the target value are developed. The method of backpropagation of gradients is used to select the optimal training rate of the neurons. The neural network optimizer, built as a superstructure over the linear PID controller, improves the regulation error from 0.23 to 0.09 and thereby reduces power consumption from 65% to 53%. The results of the conducted experiments suggest that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning PID parameters.
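The linear PID controller that such a neural optimizer tunes can be sketched as follows, driving a first-order plant to a unit setpoint; the gains, plant time constant, and step size are hypothetical values chosen for the sketch:

```python
class PID:
    """Discrete PID regulator: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant y' = (u - y) / tau to a unit setpoint.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
y, tau, setpoint = 0.0, 0.5, 1.0
for _ in range(2000):
    u = pid.update(setpoint - y)
    y += (u - y) / tau * pid.dt
```

The integral term removes the steady-state error; a neural optimizer in the paper's spirit would adjust kp, ki, kd between runs to minimize the tracking mismatch.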
An improved modulation technique suitable for a three level flying capacitor ... (IJECEIAES)
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
A review on features and methods of potential fishing zone (IJECEIAES)
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centres on data characteristics and the application of classification algorithms for identifying potential fishing zones, whose prediction relies significantly on the effectiveness of those algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in earlier results, SVM classified fisheries test data with 97.6% accuracy versus 94.2% for naïve Bayes. Based on recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f... (IJECEIAES)
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing interconnect delays to improve circuit performance. This led to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical interference is a major concern with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical interference. This study illustrates a novel noise-coupling reduction method using several interference models. A 22% drop in interference coupling from the wave-carrying TSV to the victim TSV introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types, advantages that go beyond sustainability to financial support and stability. The work in this paper introduces a hybrid system between PV and EV to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram which sets the priorities and requirements of the system. The proposed approach allows setups to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy milk farmer support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch... (IJECEIAES)
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding, droughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing climate change. The analysis's
findings are discussed in relation to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency, and offer recommendations
on how to increase the role of women in addressing climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter... (IJECEIAES)
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo... (IJECEIAES)
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a survey (IJECEIAES)
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
TOP 10 B TECH COLLEGES IN JAIPUR 2024.pptxnikitacareer3
Looking for the best engineering colleges in Jaipur for 2024?
Check out our list of the top 10 B.Tech colleges to help you make the right choice for your future career!
1) MNIT
2) MANIPAL UNIV
3) LNMIIT
4) NIMS UNIV
5) JECRC
6) VIVEKANANDA GLOBAL UNIV
7) BIT JAIPUR
8) APEX UNIV
9) AMITY UNIV.
10) JNU
TO KNOW MORE ABOUT COLLEGES, FEES AND PLACEMENT, WATCH THE FULL VIDEO GIVEN BELOW ON "TOP 10 B TECH COLLEGES IN JAIPUR"
https://www.youtube.com/watch?v=vSNje0MBh7g
VISIT CAREER MANTRA PORTAL TO KNOW MORE ABOUT COLLEGES/UNIVERSITITES in Jaipur:
https://careermantra.net/colleges/3378/Jaipur/b-tech
Get all the information you need to plan your next steps in your medical career with Career Mantra!
https://careermantra.net/
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
ISSN: 2088-8708
IJECE Vol. 7, No. 6, December 2017: 3385 – 3394
The rest of this paper is organized as follows. Section 2 presents the proposed methods, whose implementation is described in Section 3. Section 4 presents the experimental results and the security analysis of the proposed algorithms. Section 5 discusses the results and compares them with other methods, before we conclude in Section 6.
2. PROPOSED WATERMARKING/ENCRYPTION SYSTEM
In this section, we present the proposed security system for medical images. We start with the encryption process, then the first and second proposed watermarking methods, and we end with the combination of these three proposed approaches.
2.1. Proposed image encryption scheme
As shown in Figure 1, the encryption method divides the image into blocks of 8x8 pixels. The i-th encrypted block is the XOR of the i-th plain block with the i-th key block k. The block k is obtained as follows:
1) For the first block, i.e. i = 1, the block k is initialized by the matrix product of the key KE with the sum of its transpose and V.
2) For i > 1, the block k is obtained by the function F, whose parameters are the sum of the previous plain block and its encrypted block, and the previous block k.
B_i^e = B_i ⊕ F(B_{i−1} + B_{i−1}^e, k_{i−1})   for i > 1
B_1^e = B_1 ⊕ k_1                               (1)
k_1 = (K_E × (K_E' + V)) mod 2^dep
where B is the block to be encrypted, B^e the encrypted block, dep the image depth, and K_E a column vector used as key. The function F performs N right rotations (ROR) of its operand, by 1 bit each, and XORs a selected part (a sub-block of size n1) of the key block with its symmetric sub-block (also of size n1). F depends on the sum B_{i−1}(l, c) + B_{i−1}^e(l, c) modulo 64/[n1]^2. The expression of F is defined as follows:
F(B_i + B_i^e, k_i):   k_{i+1}(l, c) = ROR_N(k_i(l, c)) ⊕ S_i(9 − l, 9 − c)   (2)
where S is the sub-block symmetric to the selected sub-block. As shown in Figure 2, we determine the part to change (a sub-block of size n1) using the remainder of the division of the sum of the previous blocks B_{i−1}(l, c) + B_{i−1}^e(l, c) by 64/[n1]^2, which gives the position of the sub-block. The block k is thus divided into sub-blocks of size n1; the first sub-block is number 0, i.e. k_i(1:n1, 1:n1), and its symmetric is k_i(9−1:9−n1, 9−1:9−n1). The parameter n1 is equal to 0, 1, 2, 4 or 8.
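As a concrete illustration, the block-cipher structure of equation (1) can be sketched in Python. This is a minimal sketch, assuming 8-bit pixels and a scalar V; the real F of Figure 2 uses bit rotations and symmetric sub-block XORs of the key block, so the simplified chaotic update below is a hypothetical stand-in, not the paper's exact function:

```python
import numpy as np

DEP, BLOCK = 8, 8  # assumed image depth (bits) and the paper's 8x8 block size

def initial_key_block(KE, V, dep=DEP):
    """k1 = (KE x (KE' + V)) mod 2^dep, with KE an 8x1 column vector."""
    KE = KE.reshape(-1, 1).astype(np.int64)
    return (KE @ (KE.T + V)) % (2 ** dep)

def F(prev_sum, k, dep=DEP):
    """Hypothetical stand-in for the paper's chaotic key update: rotate the
    key block's bits and mix in the sum of the previous plain/cipher blocks."""
    n = int(prev_sum.sum()) % dep + 1                   # rotation count, 1..dep
    rotated = ((k << n) | (k >> (dep - n))) % (2 ** dep)
    return rotated ^ (int(prev_sum.sum()) % (2 ** dep))

def process(img, KE, V, decrypt=False):
    """XOR each 8x8 block with an evolving key block, as in equation (1)."""
    h, w = img.shape
    out = img.copy()
    k, prev_plain, prev_cipher = initial_key_block(KE, V), None, None
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            blk = img[r:r + BLOCK, c:c + BLOCK]
            if prev_plain is not None:                  # chaotic update for i > 1
                k = F(prev_plain + prev_cipher, k)
            res = blk ^ k
            out[r:r + BLOCK, c:c + BLOCK] = res
            prev_plain = res if decrypt else blk        # plain block of step i
            prev_cipher = blk if decrypt else res       # cipher block of step i
    return out
```

Decryption walks the blocks in the same order and regenerates the same key sequence, which is why `process` serves both directions.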
Figure 1. Scheme of the proposed encryption system. n1 is the configuration parameter of the system (the mode used in compression is n1 = 0). V and KE are the encryption keys. F is given in Figure 2
New Watermarking/Encryption Method for Medical Images Full Protection .... (Mohamed Boussif)
Figure 2. The proposed chaotic system of the block key (the function F). The inputs of the system are n1, the sum of block i and its encrypted block, and the block key ki; the output is the block key ki+1
2.2. Proposed semi-reversible robust hiding method
In this section, we hide the patient information in text form (name, UID and the doctor's report) in the corresponding image. We first give the mixing function M, which inserts the bits wi into the pixels of the image (see Figure 3). The expression of this function is:
M_{φi,f}(I_i, w_i) = (2 k_i π − φ_i) / f          if w_i = 1
M_{φi,f}(I_i, w_i) = ((2 k_i + 1) π − φ_i) / f    if w_i = 0
with k_i = r( (f I_i + φ_i) / 2π )                (3)
where r denotes rounding to the nearest integer, f and φ_i are the frequency and the i-th phase shift, respectively, M_{φi,f} is the mixing function, I_i and w_i are the pixel to be watermarked and the watermark bit, respectively, and k_i is an integer. To write the mixing function as a single equation, we replace k_i by its expression in equation (3):
M_{φi,f}(I_i, w_i) = w_i (2 k_i π − φ_i)/f + w̄_i ((2 k_i + 1) π − φ_i)/f,   k_i = r( (f I_i + φ_i) / 2π )   (4)
where w̄_i is the complement of w_i. Now, we simplify equation (4) to find the final mixing function as follows:
M_{φi,f}(I_i, w_i) = w̄_i π/f + (2π/f) r( (f I_i + φ_i) / 2π ) − φ_i/f   (5)
The reciprocal of the mixing function, used to extract the watermark from the image, is given by the next equation:
w_ex^i = M⁻¹(I_w^i) = (1 + cos(f I_w^i + φ_i)) / 2   (6)
where I_w^i and w_ex^i are the i-th watermarked pixel and the i-th extracted watermark, respectively.
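The insertion/extraction pair of equations (5) and (6) can be checked numerically. A minimal sketch in Python, assuming a frequency f = 0.5 and real-valued pixels (rounding of the watermarked value back to an integer grid is ignored here):

```python
import math

def mix(I, w, phi, f=0.5):
    """Equation (5): produce a value near pixel I such that
    cos(f*Iw + phi) = +1 when w = 1 and -1 when w = 0."""
    k = round((f * I + phi) / (2 * math.pi))        # nearest integer, r(.)
    return (1 - w) * math.pi / f + (2 * math.pi / f) * k - phi / f

def unmix(Iw, phi, f=0.5):
    """Equation (6): w_ex = (1 + cos(f*Iw + phi)) / 2, rounded to a bit."""
    return round((1 + math.cos(f * Iw + phi)) / 2)
```

Every inserted bit is recovered exactly, and the watermarked value stays within about 2π/f of the original pixel, which is why the method is only semi-reversible.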
We use a chaotic system to provide the phase shifts φ, which must be a sequence of real numbers between −π and π depending on the insertion key k_w. φ depends on its previous state and on the key k_w. φ_1 is initialized by the remainder of the real division of k_w by π. φ_{n≥2} is equal to the product of the previous threshold term s and the remainder of the division of φ_{n−1} + i × k_w by π. The expression of the sequence is given by:
φ_i = k_w mod π                             for i = 1
φ_i = s_i ( (φ_{i−1} + i k_w) mod π )       for i ≥ 2        (7)
where s_i is equal to −1 if |φ_{i−1}| ≥ π/2 and to 1 if |φ_{i−1}| < π/2.
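The phase sequence of equation (7) is straightforward to generate. A minimal sketch, assuming `math.fmod` as the real-division remainder:

```python
import math

def phase_sequence(kw, n):
    """Equation (7): phi_1 = kw mod pi; for i >= 2,
    phi_i = s_i * ((phi_{i-1} + i*kw) mod pi), with s_i = -1 when
    |phi_{i-1}| >= pi/2 and s_i = +1 otherwise, so every term stays
    strictly inside (-pi, pi)."""
    phis = [math.fmod(kw, math.pi)]
    for i in range(2, n + 1):
        s = -1.0 if abs(phis[-1]) >= math.pi / 2 else 1.0
        phis.append(s * math.fmod(phis[-1] + i * kw, math.pi))
    return phis
```

A small change in the key k_w makes the sequence diverge quickly, which is the key-sensitivity property the chaotic construction is after.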
In the second part of this section, we use the mixing function M given in equation (5) to insert the binary-coded text in the transformed image. As shown in Figure 4, the first step consists in dividing the image into blocks of size 8. For each block, we compute the 2D-DCT coefficients, then we select the lowest coefficients, where we insert, bit by bit, the watermark converted to binary code. The insertion process adds to the previous transformed block T[block_{n−1}] the mixing function of the difference between the transformed block T[block_n] and the previous transformed block T[block_{n−1}], where T is the 2D-DCT transformation. In other words, we obtain the watermarked coefficients of each block by the next expression:
W[B_n](i, j) = T[B_{n−1}](i, j) + M_{φi,j,f}( T[B_n](i, j) − T[B_{n−1}](i, j), w_{i,j} )   (8)
(8)
where 𝐵𝑛,𝑇, 𝑀and 𝑊 are, respectively, the block 𝑛,the DCT-2D transformation, the mixing function and the
watermarked block. The watermarking process start for 𝑛 = 2, the first block of imaging is used for the
watermarking of second block.
Figure 3. Principle of our mixing function method. Pi is the pixel to be watermarked. cos(fx + φn) is the scale of watermarking. We insert 1 if wi = 1, and we insert 0 if wi = 0. wPi is the watermarked pixel Pi
For the semi-restitution (semi-reversibility) of the watermarked image, we use the following formula, which is based on the addition of π/f if the extracted bit w_ex equals 1 and the subtraction of π/f if the extracted bit w_ex equals 0:
R[B_n](i, j) = W[B_n](i, j) + ( 2 M⁻¹_{φi,j,f}( W[B_n](i, j) − T[B_{n−1}](i, j) ) − 1 ) π/f   (9)
where R and M⁻¹ are, respectively, the semi-restituted block and the extraction function given in equation (6).
Figure 4. The proposed watermarking method scheme
In the first iteration, the returned string is initialized by the first DCT block coefficients of the image to be watermarked; therefore, the insert procedure starts from block number 2. Note that the parameter f must be fixed between 0.3 and 0.9. The expression of M is given in equation (5), and Z⁻¹ is a delay of one block.
2.3. Proposed fragile watermarking method
In this section, we use a second watermarking, applied before encryption, for integrity checking in the encrypted domain. We insert the image entropy, the variable V_min and the key K_w. The objectives of this second watermarking are: first, to hide the key K_w and V_min, which are used for the insertion of the medical report in the image; second, to secure the image in the encrypted domain against tampering. We use the classical fragile watermarking system, which inserts the watermark in the LSB bits of the image [11]-[12]. Since several works have attacked this type of watermarking, as in papers [13]-[14], we secure this method with a novel proposed approach based on the choice of the pixels to be watermarked, which is done by a selection key (we can use the encryption key K_E). Note that the PSNR quality of the image Lena watermarked with 144 bits is equal to 104 dB. The key K_w must be encoded on 16 bits, and each of V_min and the entropy H_w must be encoded on 64 bits; therefore, the total number of bits to be inserted is equal to 144. The watermarking process can be summarized as follows:
I_w(L_i, C_i) = ( I(L_i, C_i) and (2^dep − 2) ) or b_i
L_i = fix( rem(i K_E, M N) / N ) + 1
C_i = rem(i K_E, N) + 1                                   (10)
where M and N are the dimensions of the image to be watermarked, l and dep are the length of the watermark and the image depth, respectively, and L_i and C_i are the row and column indices of the pixel to be watermarked. I, I_w and b are the image to be watermarked, the watermarked image and the watermark, respectively. The functions fix and rem return the integer part and the remainder of the integer division, respectively. At the extractor level, we determine the extracted watermark as follows:
w_ex(i) = I_w(L_i, C_i) and 1   (11)
where w_ex is the extracted watermark; the expressions of L_i and C_i are given in equation (10). Using the key K_E, we determine the positions of the watermarked pixels, then we extract the watermark w_ex.
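The keyed LSB embedding of equations (10) and (11) can be sketched as follows. The paper's exact position expressions L_i, C_i are hard to recover from the extraction, so the keyed visiting rule below (i·K_E modulo the image size, with K_E assumed coprime to M·N so that the visited positions never collide) is an assumption, not the paper's exact formula:

```python
import numpy as np

def lsb_embed(img, bits, KE, dep=8):
    """Insert each watermark bit b_i in the LSB of a key-selected pixel:
    I_w(L_i, C_i) = (I(L_i, C_i) AND (2^dep - 2)) OR b_i   (equation (10))."""
    M, N = img.shape
    out = img.copy()
    for i, b in enumerate(bits, start=1):
        pos = (i * KE) % (M * N)          # assumed keyed position rule
        L, C = pos // N, pos % N
        out[L, C] = (int(out[L, C]) & (2 ** dep - 2)) | b
    return out

def lsb_extract(nbits, img_w, KE):
    """Equation (11): w_ex(i) = I_w(L_i, C_i) AND 1."""
    M, N = img_w.shape
    return [int(img_w[(i * KE) % (M * N) // N, (i * KE) % (M * N) % N]) & 1
            for i in range(1, nbits + 1)]
```

With 144 bits, the distortion is at most one LSB on 144 pixels, consistent with the very high PSNR reported above.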
2.4. Proposed combined encryption/watermarking schema
In this section, we combine the three proposed systems: the encryption system with the two watermarking methods. As illustrated in Figure 5, the medical image is first divided into blocks of size 8x8, for
each block, we hide the patient information (name, UID, diagnostic report) in the medical image (the hiding method is described in Section 2.2). Before the encryption process, we insert the watermarking key K_w, the minimum value of the image V_min and the image entropy H using the key K_E; the encryption then processes the watermarked image in blocks of size 8x8 (the encryption method is described in Section 2.1). The insertion of the key K_w, V_min and the entropy is described in Section 2.3. Therefore, the inputs of our overall encryption/watermarking system are the medical image to be watermarked/encrypted, the patient's information (name, UID, UIDs, diagnostic report), the two encryption keys K_E and V, and the watermarking key K_w. The output of the system is the watermarked encrypted image. At the decoding level, i.e. the decryption/extraction process, we decrypt the watermarked encrypted image with the key K_E. Then, we extract K_w^ex and the entropy H_I^ex, and we verify the integrity of the image by comparing the decrypted image entropy H_I^dec with the extracted entropy H_I^ex:
int( 10^n H_I^dec ) − int( 10^n H_I^ex ) = 0   (12)
Finally, if the image integrity is verified, we extract the patient information and the medical report with the key K_w^ex. Note that n = 2 gives a good configuration, i.e. we use two digits after the decimal point.
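With n = 2, the verification of equation (12) amounts to comparing the two entropies truncated to two decimal digits; a minimal sketch:

```python
def integrity_ok(H_dec, H_ex, n=2):
    """Equation (12): integrity holds when int(10^n * H_dec) == int(10^n * H_ex),
    i.e. the decrypted and extracted entropies agree to n digits after the
    decimal point."""
    return int(10 ** n * H_dec) == int(10 ** n * H_ex)
```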
Figure 5. Proposed combined watermarking/encryption scheme
Robust hiding is illustrated in Section 2.2, LSB insertion in Section 2.3, and encryption in Section 2.1. The binary coder is the binary representation of the ASCII code of each character. Kw is the watermarking key. KE and V are the encryption keys
The watermarking/encryption schema proposed in this paper can be merged with JPEG compression, which is widely used in the DICOM standard. We describe the merging with the JPEG schema, given in paper [15], in the following steps: a) we hide the report and the patient information in the image as illustrated in Section 2.2, but without the inverse transformation (IDCT); b) we quantize the watermarked DCT coefficients with a uniform quantization; c) we insert the entropy of the quantized coefficients and the K_w key in the quantized image; d) we encode the watermarked quantized image with Huffman coding; e) we encrypt the Huffman-coded data, as illustrated in Section 2.1, with n1 = 0.
3. IMPLEMENTATION ON EMBEDDED SYSTEMS
To implement the proposed schema on an embedded system, one must pass through the following three steps. In the first step, we implement the algorithm in the Matlab tool. In the second step, after fixing the parameters and optimizing the program, we convert the Matlab functions to C/C++ code. In the final step, we use these functions in the main project, which performs the crypto-watermarking of medical images.
3.1. Build C/C++ functions from m-file functions
In this part, we convert m-files to C/C++ static library code using the Coder application of the Matlab tool. First, we need to simplify the MATLAB code to adapt it to the embedded system; therefore, we must fix the types of all input and output variables of each function.
3.2. Implementation on the C6416 DSK card
To implement the proposed system on the DSK C6416 card, we use the CCS tool, which allows debugging C/C++ code on the card. First, we prepare the DSP/BIOS for real-time data exchange (RTDX), which consists in allocating off-chip memory. We use a first program that initializes all variables to zero, then loads the image and the keys into off-chip memory.
3.3. Implementation on Android OS
To implement the proposed system on Android OS, we use the Android NDK tool (in Android Studio), which allows integrating the C/C++ parts built in Section 3.1 into our application.
4. EXPERIMENTAL RESULTS AND SECURITY ANALYSIS
Experiments were conducted on four sets of medical images of different modalities, sizes and depths: magnetic resonance images (modality MR) of 560×560 pixels and 12-bit depth, computed tomography images (modality CT) of 512×512 pixels and 12-bit depth, digital radiography images (modality DX) of 2022×1736 pixels and 14-bit depth, and secondary capture images (modality SC) of 224×176 pixels and 16-bit depth. Some samples of our dataset are given in Figure 6. Recall that, for images encoded on 8, 12, 14 or 16 bits, our proposed watermarking/encryption system manipulates blocks of 8×8 pixels.
Figure 6. Samples of our images test sets. (a) Modality MR imaging. (b) Modality CT imaging. (c) Modality
DX imaging. (d) Modality SC imaging
We use the peak signal-to-noise ratio (PSNR) to measure the distortion between an image I and its watermarked image I_wdec:
PSNR(I, I_wdec) = 10 log10( (2^dep − 1)² / MSE(I, I_wdec) )
MSE(I, I_wdec) = (1/L) Σ_{k=1}^{L} ( I(k) − I_wdec(k) )²          (13)
where L corresponds to the number of pixels of the image I, and dep corresponds to its depth. We use the entropy H to measure the pixel variation of an image I: for a source modeled as a discrete random variable I with 2^dep symbols, where each symbol I_i has a probability P_i of appearing, the entropy H of the source I is defined as:
n
i ii PPIH 1 2log
(14)
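The PSNR of (13) and the entropy of (14) can be computed directly; a minimal sketch in Python with NumPy, assuming grayscale images stored as integer arrays:

```python
import numpy as np

def psnr(img: np.ndarray, img_wdec: np.ndarray, dep: int) -> float:
    """PSNR of Eq. (13): 10*log10((2^dep - 1)^2 / MSE)."""
    diff = img.astype(np.float64) - img_wdec.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float(10.0 * np.log10(((2 ** dep - 1) ** 2) / mse))

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of Eq. (14): H = -sum(p_i * log2(p_i))."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

An ideally encrypted 8-bit image has entropy approaching 8, which is the benchmark used in Table 1.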
We use the normalized correlation coefficient $NC$ to measure the distortion between the watermark $w$ and the extracted watermark $w_{ex}$:

$NC(w, w_{ex}) = \frac{\mathrm{Cov}(w, w_{ex})}{\sigma_w \, \sigma_{w_{ex}}}$ (15)
where $\mathrm{Cov}(x, y)$ is the covariance of $x$ and $y$, $\sigma_x$ is the standard deviation of $x$, $\bar{x}$ is the average value of $x$, and $p_i$ is the probability of $x_i$.
ISSN: 2088-8708. IJECE Vol. 7, No. 6, December 2017: 3385–3394.
Figure 7. Watermarking and encryption simulation. (a) Watermarking robustness to JPEG compression for f = 1 and f = 1.5. (b) Original image histogram. (c) Watermarking robustness to “salt & pepper” noise for f = 0.1. (d) Encrypted image histogram for n1 = 2
The covariance, standard deviation, and expectation are computed as:

$\mathrm{Cov}(x, y) = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y}), \quad \sigma_x = \sqrt{E(x^2) - E(x)^2}, \quad E(x) = \sum_{i=1}^{n} x_i \, p_i$ (16)
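The normalized correlation of (15), built from the covariance and standard deviations of (16), can be sketched as:

```python
import numpy as np

def nc(w, w_ex) -> float:
    """Normalized correlation of Eqs. (15)-(16):
    NC = Cov(w, w_ex) / (sigma_w * sigma_w_ex)."""
    w = np.asarray(w, dtype=np.float64).ravel()
    w_ex = np.asarray(w_ex, dtype=np.float64).ravel()
    cov = np.mean((w - w.mean()) * (w_ex - w_ex.mean()))
    return float(cov / (w.std() * w_ex.std()))
```

An NC close to 1 means the extracted watermark matches the embedded one, which is the robustness criterion used in Figure 7(a) and 7(c).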
The resistance of the proposed technique to differential attack is evaluated by comparing ciphered
images obtained by the encryption of two minimally different plain images. It is desired that such ciphered
images are substantially different. This is measured by the number of pixels change rate (NPCR) and the
unified average changing intensity (UACI), which are defined by:
$NPCR = \frac{100}{W \times H}\sum_{i,j} D(i,j), \quad UACI = \frac{100}{W \times H}\sum_{i,j}\frac{|I_1(i,j) - I_2(i,j)|}{2^{d}-1}$ (17)
where $I_1$ and $I_2$ are the two ciphered images whose plain images differ in only one pixel; the grayscale values of the pixels at position $(i, j)$ of $I_1$ and $I_2$ are denoted $I_1(i, j)$ and $I_2(i, j)$, respectively; $W$ and $H$ are the width and height of the ciphered image, respectively; and $D(i, j)$ equals 1 if $I_1(i, j)$ differs from $I_2(i, j)$, and 0 otherwise.
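A minimal sketch of the NPCR and UACI measures of (17), assuming two equal-sized ciphered images of bit depth d:

```python
import numpy as np

def npcr_uaci(c1: np.ndarray, c2: np.ndarray, d: int):
    """NPCR and UACI of Eq. (17) for two ciphered images c1, c2."""
    diff = c1.astype(np.int64) - c2.astype(np.int64)
    npcr = 100.0 * np.count_nonzero(diff) / diff.size   # % of changed pixels
    uaci = 100.0 * np.mean(np.abs(diff)) / (2 ** d - 1)  # avg change intensity
    return float(npcr), float(uaci)
```

For a good cipher, NPCR should approach 100 and UACI should approach 33 (for uniformly distributed ciphertexts), as reported in Table 1.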
As shown in Figure 8(b), (d), (f), and (h), the visual aspect of the images in the encrypted domain is completely noisy. To verify the sensitivity of the encryption method, we decrypted the test images with a minimally wrong key and conclude that only the correct encryption key can decrypt the encrypted images. Moreover, in Figure 7(d), the histogram of the encrypted image shows that the distribution of pixel values is uniform, which suggests that a statistical analysis would not be effective for recovering the original image content. As shown in Table 1, the entropy, NPCR, and UACI of the encrypted images approach 8, 100, and 33, respectively. Therefore, the proposed encryption system is robust against entropy and differential attacks.
[Figure 7 plots: (a) NC versus compression ratio (CR) for f = 1 and f = 1.5; (c) NC versus noise density; (b) and (d) histograms of the original and encrypted images.]
As shown in Figure 8(a), (c), (e), and (g), the watermark is imperceptible, and the average PSNR of the watermarked images is about 55 dB (see Table 1). The robustness of the proposed watermarking system is evaluated by applying JPEG compression and noise attacks. As shown in Figure 7(a), the normalized correlation coefficient NC remains close to 1, so the watermarked MR image is robust against JPEG compression for a compression ratio of up to 6.4 (quality factor of 16%). We also tested the proposed algorithm against the noise attack and conclude that it is robust to this type of attack (see Figure 7(c)).
Figure 8. (a), (c), (e), and (g) are the watermarked images for the images in Figure 6(a), (b), (c), and (d), respectively; (b), (d), (f), and (h) are the encrypted watermarked images for the images in Figure 6(a), (b), (c), and (d), respectively
Table 1. Experimental results obtained with f=0.5, Capacity equal to 0.125 and n1 = 2
Samples a b c d
PSNR of watermarked images 60.1307 41.6410 42.2410 59.3536
Original images entropy 3.1730 4.8459 4.4224 1.4572
Encrypted watermarked images entropy 7.9417 7.7238 7.0066 7.7140
NPCR 99.8964 99.9676 99.8478 99.8796
UACI 33.4545 33.7878 33.7686 33.1889
PSNR of semi-restituted images 83.3452 61.9835 63.6543 80.6547
The fragile watermarking method, which ensures integrity in the encrypted domain, is validated by applying several attacks, such as cropping and noise addition, to the ciphered images. Table 2 shows that the proposed watermarking is fragile against all attack types.
Table 2. Integrity test of the encrypted watermarked images under usual attacks
Attack | Extracted entropy | Decrypted watermarked image entropy | abs(int(100×H_Idec) − int(100×H_Ioriginal)) | Integrity check
Without | 1.4572 | 1.4578 | 0 | Pass
Compression (90%) | - | 4.4520 | - | Not
Filtering | - | 3.4516 | - | Not
Contrast adjustment | - | 3.1172 | - | Not
Cropping (1/8) | 1.4572 | 2.1172 | 118 | Not
Rotation (90°) | - | 4.4578 | - | Not
Cropping (1/(512×512)) | 1.4572 | 2.1114 | 66 | Not
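The pass/fail rule underlying Table 2 compares the decrypted and original entropies after scaling by 100 and truncating to an integer; a minimal sketch of that decision:

```python
def integrity_check(h_dec: float, h_orig: float) -> bool:
    """Integrity decision of Table 2: the image passes only if the
    entropies, truncated at two decimal places, match exactly."""
    return abs(int(100 * h_dec) - int(100 * h_orig)) == 0
```

With the first row of Table 2 (1.4578 versus 1.4572), both truncate to 145 and the check passes; any attack that shifts the entropy beyond the second decimal fails the check.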
5. COMPARISON WITH OTHER PAPERS
In this section, we compare the proposed method with those of Baiying Lei et al. [16], who proposed a reversible watermarking scheme for medical images using differential evolution; Rayachoti Eswaraiah et al. [17], who proposed a robust medical image watermarking technique; G. Coatrieux et al. [18], who proposed a watermarking method based on an image moment signature; Ali Al-Haj et al. [19], who proposed an encryption algorithm for secured medical image transmission; Ekta Walia et al. [20], who proposed a fragile and blind watermarking technique; and Bouslimi et al. [6], who proposed a joint encryption/watermarking system. Table 3 compares the proposed method with these methods in terms of availability, reliability, and confidentiality. The proposed method is therefore more secure than [6] and [16]-[20].
Table 3. Comparison with other proposed methods
Methods | Availability in embedded systems (m-Health) | Real time | Reliability | Confidentiality
Baiying Lei et al [16]
Rayachoti Eswaraiah et al [17]
G. Coatrieux et al [18]
Ali Al-Haj et al [19]
Ekta Walia et al [20]
Bouslimi et al [6]
Proposed
6. CONCLUSION
In this paper, a novel watermarking/encryption system for the full security of medical images, dedicated to embedded systems and usable in m-Health, has been proposed with the goal of safely transferring medical images. The experimental results testify to the good security provided by our algorithm; it therefore has good prospects for medical image applications and can be used in m-Health.
REFERENCES
[1] L. Constantinescu and J. Kim, “SparkMed: A Framework for Dynamic Integration of Multimedia Medical Data into Distributed m-Health Systems,” IEEE Transactions on Information Technology in Biomedicine, pp. 40-52, 2011.
[2] Z. Qian and X. Zhang, “Reversible Data Hiding in Encrypted Images with Distributed Source Encoding,” in IEEE
Transactions on Circuits and Systems for Video Technology, pp. 1-13, 2015.
[3] S. Z. Chaudhari, et al., “Secure Dissemination and Protection of Multispectral Images Using Crypto-
Watermarking,” in IEEE journal of selected topics in applied earth observations and remote sensing, pp. 1-7, 2015.
[4] X. Cao, et al., “High Capacity Reversible Data Hiding in Encrypted Images by Patch-Level Sparse
Representation,” in IEEE transactions on cybernetics, pp. 1-12, 2015.
[5] A. V. Subramanyam, et al., “Robust Watermarking of Compressed and Encrypted JPEG2000 Images,” in IEEE
transactions on multimedia, vol/issue: 14(3), pp. 703-716, 2012.
[6] D. Bouslimi, et al., “A Joint Encryption/Watermarking System for Verifying the Reliability of Medical Images,” in
IEEE transactions on information technology in biomedicine, vol/issue: 16(5), pp. 891-899, 2012.
[7] H. Suryavanshi, et al., “Digital Image Watermarking in Wavelet Domain,” in International Journal of Electrical
and Computer Engineering (IJECE), vol/issue: 3(1), 2014.
[8] N. Sethi and S. Vijay, “A New Cryptographic Strategy for Digital Images,” in International Journal of Electrical
and Computer Engineering (IJECE), vol/issue: 4(3), 2014.
[9] B. H. Prasetio, et al., “Image Encryption using Simple Algorithm on FPGA,” in TELKOMNIKA
(Telecommunication Computing Electronics and Control), vol/issue: 13(4), 2015.
[10] G. R. N. Kumari and S. Maruthuperumal, “Normalized Image Watermarking Scheme Using Chaotic System,” in
International Journal of Information and Network Security (IJINS), vol/issue: 1(4), 2012.
[11] C. Fei, et al., “Analysis and Design of Secure Watermark-Based Authentication Systems,” in IEEE transactions on
information forensics and security, pp. 43-55, 2006.
[12] X. Li, et al., “Image Integrity Authentication Scheme Based on Fixed Point Theory,” in IEEE transactions on
image processing, pp. 632-645, 2015.
[13] O. Dabeer, et al., “Detection of Hiding in the Least Significant Bit,” in IEEE transactions on signal processing, pp.
3046-3058, 2004.
[14] Y. S. Chen and R. Z. Wang, “Steganalysis of Reversible Contrast Mapping Watermarking,” in IEEE signal
processing letters, pp. 125-128, 2009.
[15] W. Luo, et al., “JPEG Error Analysis and Its Applications to Digital Image Forensics,” in IEEE transactions on
information forensics and security, pp. 480-491, 2010.
[16] B. Lei, et al., “Reversible watermarking scheme for medical image based on differential evolution,” in ELSEVIER
Expert Systems with Applications, pp. 3178–3188, 2014.
[17] R. Eswaraiah and E. S. Reddy, “Robust medical image watermarking technique for accurate detection of tampers
inside region of interest and recovering original region of interest,” in IET Image Processing, pp. 615–625, 2015.
[18] G. Coatrieux, et al., “A Watermarking Based Medical Image Integrity Control System and an Image Moment
Signature for Tampering Characterization,” in IEEE transactions on information technology in biomedicine, pp. 1-
11, 2013.
[19] A. Al-Haj, et al., “Crypto-based algorithms for secured medical image transmission,” in IET Information Security,
pp. 365–373, 2015.
[20] E. Walia and A. Suneja, “Fragile and blind watermarking technique based on Weber’s law for medical image
authentication,” in IET Computer Vision, pp. 9–19, 2013.