This document proposes an improved spread spectrum watermarking technique to withstand geometric deformations. The technique uses Direct Sequence Code Division Multiple Access (DS-CDMA) and embeds the watermark by modulating a pseudo-random noise sequence. It normalizes the cover image before embedding to achieve robustness against geometric attacks like rotation and scaling. The watermark is extracted using correlation between the watermarked image and pseudo-random noise sequence. Experimental results show the technique achieves a low bit error rate under various geometric attacks when the watermark strength is increased.
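To make the embedding and correlation-based detection concrete, here is a minimal spread-spectrum sketch in Python/NumPy. The PN seed, the gain alpha, and blind detection by correlating the marked image with each stored PN pattern are illustrative assumptions, not the paper's exact DS-CDMA construction.

```python
import numpy as np

def embed_bits(image, bits, key=42, alpha=5.0):
    """Spread-spectrum embedding: add one +/- pseudo-random pattern per bit."""
    rng = np.random.default_rng(key)                     # key-derived PN generator
    pn = rng.choice([-1.0, 1.0], size=(len(bits),) + image.shape)
    marked = image.astype(np.float64).copy()
    for i, b in enumerate(bits):
        marked += alpha * (1.0 if b else -1.0) * pn[i]   # larger alpha -> lower BER
    return np.clip(marked, 0, 255), pn

def extract_bits(marked, pn):
    """Blind detection: the sign of the correlation with each PN pattern."""
    residual = marked - marked.mean()
    return [int(np.sum(residual * p) > 0) for p in pn]

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
bits = [1, 0, 1, 1]
marked, pn = embed_bits(img, bits)
print(extract_bits(marked, pn))   # should usually recover [1, 0, 1, 1]
```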
LabVIEW with DWT for denoising the blurred biometric images - ijcsa
In this paper, denoising of blurred biometric images (fingerprints) is presented and investigated using LabVIEW applications. The image is blurred and corrupted with Gaussian noise. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into two parts, which increases the processing speed for large biometric images. The work includes two tasks: the first designs the LabVIEW system that calculates and presents the approximation coefficients, by which the image's blur factor is reduced to a minimum according to the proposed algorithm; the second removes the image's noise by calculating the regression coefficients according to the Bayesian shrinkage estimation method.
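As a rough illustration of the DWT-plus-Bayesian-shrinkage idea (not the LabVIEW implementation; the wavelet, decomposition level, and median-based noise estimate are assumptions), a minimal Python/PyWavelets sketch could look like this:

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db2", level=2):
    """Soft-threshold each detail subband with a BayesShrink-style threshold."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                      # approximation coefficients kept as-is
    for details in coeffs[1:]:
        shrunk = []
        for band in details:
            signal_var = max(band.var() - sigma**2, 1e-12)
            thr = sigma**2 / np.sqrt(signal_var)          # BayesShrink threshold
            shrunk.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```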
Image Splicing Detection involving Moment-based Feature Extraction and Classi... - IDES Editor
In the modern age, the digital image has taken
the place of the original analog photograph, and the forgery
of digital images has become increasingly easy, and harder
to detect. Image splicing is the process of making a
composite picture by cutting and joining two or more
photographs. An approach to efficient image splicing
detection is proposed here. The spliced image often
introduces a number of sharp transitions such as lines,
edges and corners. Phase congruency is a sensitive measure
of these sharp transitions and is hence proposed as a
feature for splicing detection. Statistical moments of
characteristic functions of wavelet sub-bands have been
examined to detect the differences between the authentic
images and spliced images. Image splicing detection can be
treated as a two-class pattern recognition problem, in which the model is built using moment features and other parameters extracted from the given test image. An artificial neural network (ANN) is chosen as the classifier to train and test on the given images.
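A hedged sketch of one of the features named above, statistical moments of the characteristic functions of wavelet sub-band histograms; the bin count, moment orders, wavelet and level are illustrative assumptions, and the resulting feature vector would be fed to the ANN classifier.

```python
import numpy as np
import pywt

def cf_moments(subband, n_bins=64, orders=(1, 2, 3)):
    """Moments of the characteristic function (FFT of the histogram) of a subband."""
    hist, _ = np.histogram(subband.ravel(), bins=n_bins, density=True)
    cf = np.abs(np.fft.fft(hist))[: n_bins // 2]          # one-sided magnitude
    k = np.arange(cf.size)
    return [float((k**n * cf).sum() / (cf.sum() + 1e-12)) for n in orders]

def splicing_features(img, wavelet="haar", level=2):
    """Concatenate CF moments over all detail subbands as a splicing feature vector."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    feats = []
    for details in coeffs[1:]:
        for band in details:
            feats.extend(cf_moments(band))
    return np.array(feats)
```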
Novel DCT based watermarking scheme for digital images - IDES Editor
There is an ever-growing interest in copyright protection of multimedia content, so digital watermarking techniques are widely practiced. With the growth of internet connectivity and digital libraries, watermarking for the protection of digital content is extensively researched. In this paper
we present a novel watermark generation scheme
based on the histogram of the image and apply it to the
original image in the transform (DCT) domain. Further, we study the performance of the watermark against some common attacks on images. Experimental results show that the embedded watermark is imperceptible and the image quality is not degraded.
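A minimal sketch, under assumptions the abstract does not spell out (which DCT coefficients are modified, the gain, and how the histogram is turned into a binary sequence), of generating a watermark from the image histogram and adding it in the DCT domain:

```python
import numpy as np
from scipy.fft import dctn, idctn

def histogram_watermark(img, length=1024):
    """Derive a +/-1 sequence from the image histogram (an assumed construction)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    seq = np.resize(hist, length)
    return np.where(seq > np.median(seq), 1.0, -1.0)

def embed_dct(img, alpha=4.0):
    """Add the watermark to an assumed block of mid-frequency DCT coefficients."""
    wm = histogram_watermark(img)
    C = dctn(img.astype(np.float64), norm="ortho")
    flat = C.ravel()
    idx = np.arange(1000, 1000 + wm.size)   # assumes the image has > ~2048 pixels
    flat[idx] += alpha * wm
    return idctn(flat.reshape(C.shape), norm="ortho"), wm
```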
Blur Parameter Identification using Support Vector Machine - IDES Editor
This paper presents a scheme to identify blur parameters using a support vector machine (SVM). A multiclass approach has been used to classify the length of motion blur and the sigma parameter of atmospheric blur. Different SVM models have been constructed to classify the parameters. Experimental results show the robustness of the proposed approach in classifying blur parameters.
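A hedged sketch of the multiclass SVM step using scikit-learn; the features and blur-length labels below are synthetic placeholders, since the abstract does not specify the feature extraction.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for real features (e.g. spectral descriptors of blurred
# patches) and labels encoding four candidate motion-blur lengths.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 4, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # multiclass handled one-vs-one
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))        # near chance on random data
```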
Non-Blind Deblurring Using Partial Differential Equation Method - Editor IJCATR
In this paper, a new idea for a two-dimensional image deblurring algorithm is introduced which uses basic concepts of PDEs... The various methods to estimate the degradation function for use in restoration (the PSF is known a priori, which is called non-blind deblurring) are observation, experimentation and mathematical modeling. Here, PDE-based mathematical modeling is proposed to model the degradation and recovery process. Several restoration methods such as Wiener filtering, inverse filtering [1], constrained least squares, and Lucy-Richardson iteration remove the motion blur either using the Fourier transform in the frequency domain or by using optimization techniques. The main difficulty with these methods is estimating the deviation of the restored image from the original image at individual points, which is due to these methods processing in the frequency domain. Another method, the travelling-wave deblurring method, is an approach that works in the spatial domain. A PDE-type observation model describes well several physical mechanisms, such as relative motion between the camera and the subject (motion blur), bad focusing (defocusing blur), or a number of other mechanisms that are well modeled by a convolution. Finally, the PDE method is compared with existing restoration techniques such as Wiener filters and median filters [2], and the results are compared on the basis of the calculated PSNR for various noises.
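For reference, a minimal frequency-domain Wiener deconvolution, one of the classical baselines the abstract contrasts with the PDE approach; the PSF and the noise-to-signal constant are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Classical Wiener filter in the frequency domain: F = H* G / (|H|^2 + NSR)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F * G))

# Example degradation: an assumed horizontal motion-blur PSF of length 9.
psf = np.zeros((9, 9))
psf[4, :] = 1.0 / 9.0
```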
The objective of this work is to propose an image
denoising technique and compare it with image denoising
using ridgelets. The proposed method uses slantlet transform
instead of wavelets in the ridgelet transform. Experimental results show that the proposed method is more effective than ridgelets
in noise removal. The proposed method is effective in
compressing images while preserving edges.
Image Denoising Using Wavelet Transform - IJERA Editor
In this project, we have studied the importance of wavelet theory in image denoising over other traditional methods. We experimentally studied the importance of thresholding in wavelet theory and the two basic thresholding methods, i.e. hard and soft thresholding. We also studied why soft thresholding is preferred over hard thresholding, three types of soft thresholding (BayesShrink, SureShrink, VisuShrink), and the advantages and disadvantages of each of them.
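A brief sketch of hard versus soft thresholding with the VisuShrink (universal) threshold in PyWavelets; the wavelet, level and median-based noise estimate are assumptions.

```python
import numpy as np
import pywt

def visu_shrink(img, wavelet="db4", level=3, mode="soft"):
    """Universal (VisuShrink) threshold: sigma * sqrt(2 * log N)."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode=mode) for band in details)
        for details in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# mode="hard" only zeroes small coefficients; mode="soft" also shrinks the rest.
```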
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIP - csandit
Most watermarking methods use pixel values or coefficients as the judgment condition to embed or extract a watermark image. Variation in these values may lead to inaccurate conditions and hence to incorrect judgments. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels. From the observation of common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or halftone transformation. This greatly helps reduce the modification strength required against image processing operations. Experimental results show that the proposed method can resist various attacks while keeping the image quality acceptable.
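A toy sketch of judging a bit from the scale relationship (ordering) of two pixels; the pixel pairing, margin and clipping are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def embed_bit(img, p, q, bit, margin=4):
    """Force the order of two pixels so that bit=1 means img[p] >= img[q]."""
    out = img.astype(np.int32).copy()
    a, b = out[p], out[q]
    if bit == 1 and a < b + margin:
        out[p], out[q] = max(a, b) + margin // 2, min(a, b)
    elif bit == 0 and a + margin > b:
        out[p], out[q] = min(a, b), max(a, b) + margin // 2
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bit(img, p, q):
    """The order of the pair survives mild value changes from common processing."""
    return int(img[p] >= img[q])
```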
An Application of Second Generation Wavelets for Image Denoising using Dual T... - IDES Editor
The lifting scheme of the discrete wavelet transform
(DWT) is now quite well established as an efficient technique
for image denoising. The lifting-scheme factorization of biorthogonal filter banks is carried out with a linear-adaptive, delay-free and fast decomposition arithmetic. This adaptive factorization aims to achieve a transparent, more generalized, low-complexity and fast decomposition process while preserving the features that an ordinary wavelet decomposition offers. This work targets a considerable reduction in the computational complexity and power required for decomposition. The well-known drawbacks of the DWT structure, namely shift sensitivity and poor directionality, have already been shown to be overcome by the dual-tree complex wavelet (DT-CWT) structure. The features of the DT-CWT and the robust lifting scheme are suitably combined to achieve image denoising with a marked rise in computational speed and directionality, together with a desirable drop in computation time, power and algorithmic complexity compared to other techniques.
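To make the lifting idea concrete, here is a one-level Haar lifting step (split, predict, update) in NumPy; this is a generic illustration, not the paper's biorthogonal factorization or its DT-CWT combination.

```python
import numpy as np

def haar_lift_forward(x):
    """One Haar lifting step on a 1-D signal of even length."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + 0.5 * detail   # update to preserve the running average
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - 0.5 * detail
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4, 6, 10, 12, 8, 8, 2, 0], dtype=float)
a, d = haar_lift_forward(x)
print(np.allclose(haar_lift_inverse(a, d), x))   # True: perfect reconstruction
```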
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS - csandit
Fingerprint image compression based on geometric transforms is an important research topic. In recent years many transforms have been proposed to give the best representation of a particular type of image, the fingerprint image, such as classical wavelets and wave atoms. In this paper we present a comparative study of these transforms in order to use them for compression. The results show that for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform brings a considerable contribution to the compression of fingerprint images by achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
Improved anti-noise attack ability of image encryption algorithm using de-noi... - TELKOMNIKA JOURNAL
Information security is considered one of the important issues of the information age, used to preserve secret information throughout transmission in practical applications. With regard to image encryption, many schemes related to information security have been applied. Such approaches can be categorized into two domains: the frequency domain and the spatial domain. The presented work develops an encryption technique on the basis of a conventional watermarking system using singular value decomposition (SVD), discrete cosine transform (DCT), and discrete wavelet transform (DWT) together. The suggested DWT-DCT-SVD method has high robustness in comparison to other conventional approaches, and the approach is further enhanced for high robustness against Gaussian noise attacks by using a DWT-based denoising step. The mean square error (MSE) and the peak signal-to-noise ratio (PSNR) are the performance measures on which this study's results are based, and they show that the algorithm used in this study has high robustness against Gaussian noise attacks.
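A minimal sketch of the DWT-DCT-SVD embedding idea in Python; the band choice, the gain alpha, and the non-blind extraction that keeps the original singular values as a key are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_dwt_dct_svd(cover, watermark, alpha=0.05):
    """Embed watermark samples into the singular values of the LL band's DCT."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
    C = dctn(LL, norm="ortho")
    U, S, Vt = np.linalg.svd(C)
    Sw = S + alpha * np.resize(watermark.ravel(), S.shape)
    Cw = U @ np.diag(Sw) @ Vt
    LLw = idctn(Cw, norm="ortho")
    return pywt.idwt2((LLw, (LH, HL, HH)), "haar"), (S,)

def extract_dwt_dct_svd(marked, key, alpha=0.05):
    """Non-blind extraction; assumes the singular-value ordering is preserved."""
    (S,) = key
    LL, _ = pywt.dwt2(marked.astype(float), "haar")
    Sw = np.linalg.svd(dctn(LL, norm="ortho"), compute_uv=False)
    return (Sw - S) / alpha
```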
In this paper a PDE-based hybrid method for image denoising is introduced. The method is a two-stage filter with anisotropic diffusion followed by wavelet-based Bayesian shrinkage. Efficient denoising is achieved by reducing the convergence time of the anisotropic diffusion.
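A sketch of the first stage, a basic Perona-Malik anisotropic diffusion loop; the conductance function, kappa, step size and iteration count are assumptions, and the wavelet-based Bayesian shrinkage stage would follow.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion: smooth flat regions while preserving strong edges."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u     # differences to the four neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Conductance g = exp(-(|grad|/kappa)^2) turns diffusion off at edges.
        u += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```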
Image Compression Using Wavelet Packet Tree - IDES Editor
Methods of compressing data prior to storage and
transmission are of significant practical and commercial
interest. The need for image compression has grown continuously over the last decade. Image compression involves transformation of the image, quantization and encoding. One of the most powerful and promising approaches in this area is
image compression using discrete wavelet transform. This
paper describes a new approach called as wavelet packet tree
for image compression. It constructs the best tree on the basis
of Shannon entropy. This new approach compares the entropy of the decomposed nodes (child nodes) with the entropy of the node being decomposed (parent node) and decides whether to decompose the node. In addition, the authors have proposed
an adaptive thresholding for quantization, which is based on
type of wavelet used and nature of image. Performance of the
proposed algorithm is compared with the existing wavelet transform algorithm in terms of percentage of zeros, percentage of energy retained, and signal-to-noise ratio.
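A hedged sketch of the parent-versus-children entropy check used to grow the wavelet packet tree, implemented here with recursive pywt.dwt2 calls; the Shannon entropy definition and stopping rule are the standard best-basis ones and may differ in detail from the authors' variant.

```python
import numpy as np
import pywt

def shannon_entropy(c):
    p = c.ravel() ** 2
    p = p / (p.sum() + 1e-12)
    return float(-(p * np.log(p + 1e-12)).sum())

def best_basis(block, wavelet="haar", max_level=3):
    """Split a node only if the children's total entropy beats the parent's."""
    if max_level == 0:
        return {"leaf": block}
    cA, (cH, cV, cD) = pywt.dwt2(block, wavelet)
    children = [cA, cH, cV, cD]
    if sum(shannon_entropy(c) for c in children) < shannon_entropy(block):
        return {k: best_basis(c, wavelet, max_level - 1)
                for k, c in zip("ahvd", children)}
    return {"leaf": block}   # keep the parent node undecomposed
```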
Removing noise from medical images is still a challenging problem for researchers. Added noise is not easy to remove from the images. Several algorithms have been published, and each approach has its assumptions, advantages, and limitations. This paper summarizes the major techniques for denoising medical images and identifies which one is better. We conclude that the multiwavelet technique with soft thresholding is the best technique for image denoising.
Performance Analysis of Spatial and Frequency Domain Multiple Data Embedding ... - CSCJournals
Data hiding is an age-old technique used to hide data in an image. Several attacks are prevalent for hacking the data hidden inside an image, and considerable research is going on in this area to protect the hidden data from unauthorized access. The current work focuses on studying the behavior of spatial and frequency domain multiple data embedding techniques over noise-prone channels, enabling the user to select an optimal embedding technique. The performance of these techniques is also evaluated for multiple data embedded inside a single cover image. The robustness of the watermark is tested by incorporating several attacks and measuring the watermark strength.
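As a concrete example of the spatial-domain family of techniques compared here, a minimal least-significant-bit (LSB) embedder and extractor; the bit layout is an illustrative assumption.

```python
import numpy as np

def lsb_embed(cover, payload_bits):
    """Spatial-domain embedding: write payload bits into the pixel LSBs."""
    flat = cover.astype(np.uint8).ravel().copy()
    bits = np.asarray(payload_bits, dtype=np.uint8)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return (stego.ravel()[:n_bits] & 1).astype(np.uint8)

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
msg = np.random.randint(0, 2, 16)
print(np.array_equal(lsb_extract(lsb_embed(cover, msg), 16), msg))   # True
```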
WAVELET BASED AUTHENTICATION/SECRET TRANSMISSION THROUGH IMAGE RESIZING (WA... - sipij
The paper presents a wavelet-based steganographic/watermarking technique in the frequency domain, termed WASTIR, for secret message/image transmission or image authentication. Number-system conversion of the secret image, by changing the radix from decimal to quaternary, is the pre-processing step of the technique. The cover image is scaled through inverse discrete wavelet transformation with false horizontal and vertical coefficients, and the quaternary digits are embedded through a hash function and a secret key. Experimental results are computed and compared with existing steganographic techniques such as WTSIC, Yuancheng Li's method and Region-Based in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF), which show better performance for WASTIR.
IMAGE CODING THROUGH Z-TRANSFORM WITH LOW ENERGY AND BANDWIDTH (IZEB) - cscpconf
In this paper a Z-transform based image coding technique is proposed. The technique uses energy-efficient, low-bandwidth invisible data embedding with minimal computational complexity. It requires roughly half the bandwidth of the traditional Z-transform when transmitting multimedia content such as images over a network.
IMAGE DENOISING BY MEDIAN FILTER IN WAVELET DOMAIN - ijma
The details of an image with noise may be restored by removing noise through a suitable image de-noising
method. In this research, a new method of image de-noising based on using median filter (MF) in the
wavelet domain is proposed and tested. Various types of wavelet transform filters are used in conjunction
with median filter in experimenting with the proposed approach in order to obtain better results for image
de-noising process, and, consequently to select the best suited filter. Wavelet transform working on the
frequencies of sub-bands split from an image is a powerful method for analysis of images. According to this
experimental work, the proposed method presents better results than using only wavelet transform or
median filter alone. The MSE and PSNR values are used for measuring the improvement in de-noised
images.
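A minimal sketch of combining the two ideas, a median filter applied to the detail sub-bands of a DWT decomposition; the wavelet, level and window size are assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import median_filter

def wavelet_median_denoise(img, wavelet="db2", level=2, size=3):
    """Median-filter each detail subband, then reconstruct the image."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    new_coeffs = [coeffs[0]] + [
        tuple(median_filter(band, size=size) for band in details)
        for details in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```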
Power System State Estimation - A Review - IDES Editor
The aim of this article is to provide a comprehensive
survey on power system state estimation techniques. The
algorithms used for finding the system states under both static
and dynamic state estimation are discussed in brief. The authors are of the opinion that pursuing research in the area of state estimation with PMU and SCADA measurements is state of the art and timely.
Artificial Intelligence Technique based Reactive Power Planning Incorporating... - IDES Editor
Reactive Power Planning is a major concern in the operation and control of power systems. This paper compares
the effectiveness of Evolutionary Programming (EP) and
New Improved Differential Evolution (NIMDE) to solve
Reactive Power Planning (RPP) problem incorporating
FACTS Controllers like Static VAR Compensator (SVC),
Thyristor Controlled Series Capacitor (TCSC) and Unified
power flow controller (UPFC) considering voltage stability.
With the help of the Fast Voltage Stability Index (FVSI), the critical
lines and buses are identified to install the FACTS controllers.
The optimal settings of the control variables of the generator
voltages, transformer tap settings, and the allocation and parameter settings of the SVC, TCSC and UPFC are considered for reactive power planning. Testing and validation of the proposed algorithm are conducted on the IEEE 30-bus system and a 72-bus Indian system. Simulation results show that the UPFC gives better results than the SVC and TCSC, and that the FACTS controllers reduce the system losses.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-... - IDES Editor
Damping of power system oscillations with the help of the proposed optimal Proportional Integral Derivative Power System Stabilizer (PID-PSS) and Static Var Compensator (SVC)-based controllers is thoroughly investigated in this paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi machine power systems by considering detailed model
of the generators (model 1.1). The effectiveness of FACTS-based controllers in general, and of the SVC-based controller in particular, depends upon their proper location. Modal controllability and observability are used to locate the SVC-based
controller. The performance of the proposed controllers is
compared with conventional lead-lag power system stabilizer
(CPSS) and demonstrated on the 10-machine, 39-bus New England test system. Simulation studies show that the proposed genetic-based PID-PSS with SVC-based controller provides better performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi... - IDES Editor
The need to operate the power system economically and with optimum voltage levels has led to an increased interest in Distributed Generation. In order to reduce the power losses and to improve
the voltage in the distribution system, distributed generators
(DGs) are connected to load bus. To reduce the total power
losses in the system, the most important process is to identify
the proper locations and sizes of the DGs. This paper presents a new methodology using a population-based metaheuristic approach, namely the Artificial Bee Colony (ABC) algorithm, for the placement of Distributed Generators (DG) in radial distribution systems to reduce the real power losses, improve the voltage profile, and mitigate voltage sags. Power loss reduction is an important factor for utility companies because it is directly proportional to the company's benefits in a competitive electricity market, while meeting better power quality standards is equally important as it has a vital effect on customer orientation. In this paper an ABC algorithm is developed to achieve these goals together. In order to evaluate
sag mitigation capability of the proposed algorithm, voltage
at voltage-sensitive buses is investigated. An existing 20 kV network has been chosen as the test network, and the results of the proposed method are compared on the radial distribution system.
Line Losses in the 14-Bus Power System Network using UPFC - IDES Editor
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs little modification in the Jacobian matrix. A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery... - IDES Editor
The size and shape of an opening in a dam cause stress concentration and also cause stress variation in the rest of the dam cross-section. The gravity method of analysis does not consider the size of the opening or the elastic properties of the dam material. Thus the objective of this study is to apply the Finite Element Method, which considers the size of the opening, the elastic properties of the material, and the stress distribution due to geometric discontinuity in the dam cross-section. Stress concentration inside the dam increases with the opening in the dam, which can result in failure of the dam. Hence it is necessary to analyse large openings inside the dam. The analysis is carried out by keeping the percentage area of the opening constant and varying the size and shape of the opening. For this purpose a section of the Koyna Dam is considered. The dam is defined as a plane strain element in FEM, based on geometry and loading conditions, so a 2D plane strain analysis is carried out. The results obtained are then compared to determine the most efficient way of providing a large opening in a gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling - IDES Editor
Pushover analysis is a popular tool for seismic performance evaluation of existing and new structures. It is a nonlinear static procedure in which monotonically increasing loads are applied to the structure until the structure is unable to resist further load. The strength of concrete and steel adopted for the analysis of a structure may not be the same when the real structure is constructed, and pushover analysis results are very sensitive to the material model, the geometric model, the location of plastic hinges and, in general, to the procedure followed by the analyst. In this paper an attempt has been made to assess the uncertainty in pushover analysis results by considering user-defined hinges, with the frame modeled both as a bare frame and as a frame with the slab modeled as a rigid diaphragm. The uncertain parameters considered include the strength of concrete, the strength of steel and the cover to the reinforcement, which are randomly generated and incorporated into the analysis. The results are then compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile... - IDES Editor
This paper, based on auctions, presents a framework for secure multi-party decision protocols. In addition to implementations that are very lightweight, the main focus is on synchronizing security features to avoid manipulation of agreements and to reduce user traffic. Through this paper one can understand that different auction protocols on top of the framework can be run collaboratively using mobile devices. The paper presents the negotiation between the auctioneer and the offerer, and this negotiation shows that multi-party security is far better than in the existing system.
Selfish Node Isolation & Incentivation using Progressive Thresholds - IDES Editor
The problems associated with selfish nodes in MANETs are addressed by a collaborative watchdog approach, which reduces the detection time for selfish nodes and thereby improves the performance and accuracy of watchdogs [1]. Related works make use of credit-based systems, reputation-based mechanisms, and pathrater and watchdog mechanisms to detect such selfish nodes. In this paper we follow an approach
of collaborative watchdog which reduces the detection
time for selfish nodes and also involves the removal of such
selfish nodes based on some progressively assessed thresholds.
The threshold gives the nodes a chance to stop misbehaving
before it is permanently deleted from the network.
The node passes through several isolation processes before it
is permanently removed. Another version of AODV protocol
is used here which allows the simulation of selfish nodes in
NS2 by adding or modifying log files in the protocol.
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS... - IDES Editor
Wireless sensor networks are networks with a non-wired infrastructure and dynamic topology. In the OSI model each layer is prone to various attacks, which degrade the performance of a network. In this paper several attacks on four layers of the OSI model are discussed, and a security mechanism is described to prevent an attack in the network layer, i.e. the wormhole attack. In a wormhole attack, two or more malicious nodes create a covert channel that attracts traffic towards itself by advertising a low-latency link, and then start dropping and replaying packets in the multi-path route. This paper proposes a promiscuous-mode method to detect and isolate the malicious node during a wormhole attack by using the Ad-hoc On-demand Distance Vector routing protocol (AODV) with an omnidirectional antenna. In the implemented methodology, nodes that are not participating in multi-path routing generate an alarm message during the delay, and the malicious node is then detected and isolated from the network. We also notice that not only the same kinds of attacks but also the same kinds of countermeasures can appear in multiple layers. For example, misbehavior detection techniques can be applied to almost all the layers discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in... - IDES Editor
Recent advancements in wireless technology and its widespread deployment have brought remarkable efficiency gains in the corporate, industrial and military sectors. The increasing popularity and usage of wireless technology is creating a need for more secure wireless ad hoc networks. This paper aims to research and develop a new protocol that prevents wormhole attacks on an ad hoc network. A few existing protocols detect wormhole attacks, but they require highly specialized equipment not found on most wireless devices. This paper develops a defense against wormhole attacks, an anti-worm protocol based on responsive parameters, that does not require a significant amount of specialized equipment, tight clock synchronization, or GPS dependencies.
Cloud Security and Data Integrity with Client Accountability Framework - IDES Editor
The Cloud based services provide much efficient
and seamless ways for data sharing across the cloud. The fact
that the data owners no longer possess data makes it very
difficult to assure data confidentiality and to enable secure
data sharing in the cloud. Despite all its advantages, this remains a major limitation that acts as a barrier to the
wider deployment of cloud based services. One of the possible
ways for ensuring trust in this aspect is the introduction of
accountability feature in the cloud computing scenario. The
cloud framework requires the promotion of distributed accountability in such a dynamic environment [1]. In some works, an accountability framework is suggested to ensure distributed accountability for data sharing by generating only a log of data accesses, without any embedded feedback mechanism for owner permission towards data protection [2]. The proposed system is an enhanced client
accountability framework which provides an additional client
side verification for each access towards enhanced security of
data. The integrity of content of data which resides in the
cloud service provider is also maintained by secured
outsourcing. Besides, the authentication of JAR (Java Archive) files is done to ensure file protection and to maintain a safer
environment for data sharing. The analysis of various
functionalities of the framework depicts both the
accountability and security feature in an efficient manner.
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet - IDES Editor
An HTTP botnet uses the HTTP protocol to create a chain of bots, thereby compromising other systems. By using the HTTP protocol and port number 80, attacks can not only be hidden but can also pass through the firewall without being detected. DPR-based detection leads to better analysis of botnet attacks [3]; however, it provides only probabilistic detection of the attacker and is also time-consuming and error-prone. This paper proposes a Genetic
algorithm based layered approach for detecting as well as
preventing botnet attacks. The paper reviews p2p firewall
implementation which forms the basis of filtering.
Performance evaluation is done based on precision, F-value
and probability. Layered approach reduces the computation
and overall time requirement [7]. Genetic algorithm promises
a low false positive rate.
Enhancing Data Storage Security in Cloud Computing Through Steganography - IDES Editor
In cloud computing, data storage is a significant issue
because the entire data reside over a set of interconnected
resource pools that enables the data to be accessed through
virtual machines. It moves the application software and databases to large data centers, where the management of
data is actually done. As the resource pools are situated over
various corners of the world, the management of data and
services may not be fully trustworthy. So, there are various
issues that need to be addressed with respect to the
management of data, service of data, privacy of data, security
of data etc. But the privacy and security of data is highly
challenging. To ensure privacy and security of data at rest in cloud computing, we have proposed an effective and novel approach to ensure data security by hiding data within images, following the concept of steganography. The main objective of this paper is to prevent
data access from cloud data storage centers by unauthorized
users. This scheme perfectly stores data at cloud data storage
centers and retrieves data from it when it is needed.
The main tasks of a Wireless Sensor Network
(WSN) are data collection from its nodes and communication
of this data to the base station (BS). The protocols used for
communication among the WSN nodes and between the WSN
and the BS, must consider the resource constraints of nodes,
battery energy, computational capabilities and memory. The
WSN applications involve unattended operation of the network
over an extended period of time. In order to extend the lifetime
of a WSN, efficient routing protocols need to be adopted. The
proposed low power routing protocol based on tree-based
network structure reliably forwards the measured data towards
the BS using TDMA. An energy consumption analysis of the
WSN making use of this protocol is also carried out. It is
found that the network is energy efficient, with an average duty cycle of 0.7% for the WSN nodes. The OMNeT++ simulation platform along with the MiXiM framework is used.
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... - IDES Editor
The security of authentication of internet based
co-banking services should not be susceptible to high risks.
The passwords are highly vulnerable to virus attacks due to
the lack of high end embedding of security methods. In order
for the passwords to be more secure, people are generally
compelled to select jumbled up character based passwords
which are not only less memorable but are also equally prone
to insecurity. Multiple use of distributed shares has been
studied to solve the problem of authentication by algorithms
based on thresholding of pixels in image processing and visual
cryptography concepts where the subset of shares is considered
for the recovery of the original image for authentication using a correlation function [1][2]. The main disadvantage of the above studies is the plain storage of the shares; moreover, one of the shares is supplied to the customer, which can lead to the possibility of misuse by a third party. This paper proposes a
technique for scrambling of pixels by key based random
permutation (KBRP) within the shares before the
authentication has been attempted. Total number of shares to
be created is dependent on the multiplicity of ownership of
the account. By this method, the customers' uncertainty with regard to the security, storage and retrieval of their half of the shares is minimized.
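A toy sketch of a key-based pixel permutation within a share, using NumPy's seeded generator as a stand-in for the KBRP construction described above.

```python
import numpy as np

def scramble(share, key):
    """Permute the pixels of a share with a key-derived permutation."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(share.size)
    return share.ravel()[perm].reshape(share.shape), perm

def unscramble(scrambled, perm):
    out = np.empty_like(scrambled.ravel())
    out[perm] = scrambled.ravel()
    return out.reshape(scrambled.shape)

share = np.random.randint(0, 2, (4, 4), dtype=np.uint8)   # a binary VC share
s, perm = scramble(share, key=1234)
print(np.array_equal(unscramble(s, perm), share))          # True
```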
This paper presents a trifocal Rotman Lens Design
approach. The effects of focal ratio and element spacing on
the performance of Rotman Lens are described. A three beam
prototype feeding 4 element antenna array working in L-band
has been simulated using RLD v1.7 software. Simulated
results show that the simulated lens has a return loss of –12.4 dB at 1.8 GHz. Beam-to-array-port phase error variation
with change in the focal ratio and element spacing has also
been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images - IDES Editor
Hyperspectral images can be efficiently compressed
through a linear predictive model, as for example the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by individuating,
through an off-line approach, a common subset of bands, which
are not spectrally related with any other bands. These bands
are not useful as prediction reference for the SLSQ 3-D
predictive model and we need to encode them via other
prediction strategies which consider only spatial correlation.
We have obtained this subset by clustering the AVIRIS bands
via the clustering by compression approach. The main result
of this paper is the list of the bands, not related with the
others, for AVIRIS images. The clustering trees obtained for
AVIRIS and the relationship among bands they depict is also
an interesting starting point for future research.
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... - IDES Editor
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active site.
Each hydrogen bond of the network is described in equivalent
electrical circuit by three or four-terminal block-element.
Each block-element is coded in Matlab. Static and dynamic
analyses are performed. The resultant microelectronic circuit
analogous to the hydrogen bonding network operates as
current mirror, sine pulse source, triangular pulse source as
well as signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... - IDES Editor
In this paper a method is proposed to discriminate real-world scenes into natural and man-made scenes of similar depth. The global roughness of a scene image varies as a function
of image-depth. Increase in image depth leads to increase in
roughness in manmade scenes; on the contrary natural scenes
exhibit smooth behavior at higher image depth. This particular
arrangement of pixels in scene structure can be well explained
by local texture information in a pixel and its neighborhood.
Our proposed method analyses local texture information of a
scene image using texture unit matrix. For final classification
we have used both supervised and unsupervised learning using
K-Nearest Neighbor classifier (KNN) and Self Organizing
Map (SOM) respectively. This technique is useful for online classification due to its very low computational complexity.
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Threats to mobile devices are more prevalent and are increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect their personal devices and information.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Generative AI Deep Dive: Advancing from Proof of Concept to Production - Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! - SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.