This document presents a method for non-blind image deblurring using partial differential equations (PDEs). It introduces a PDE-based model of the blurring caused by relative motion between the camera and the object; discretizing the model via the Navier-Stokes equation yields a PDE that can be used to deblur images. Algorithms are presented for deblurring images blurred in the vertical and horizontal directions separately, along with a combined algorithm for two-directional motion blur. Experimental results on blurred and noisy test images show that the PDE method achieves better deblurring than techniques such as Wiener filtering, as measured by higher peak signal-to-noise ratio values.
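As a rough illustration of the quantities involved above, the following sketch (an illustrative example, not the paper's code) simulates horizontal motion blur with a simple box kernel and computes the PSNR metric used for comparison:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def horizontal_motion_blur(image, length):
    """Blur each row with a length-`length` box kernel (a simple motion-blur model)."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
```

A higher PSNR after deblurring indicates an estimate closer to the sharp reference.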
An Efficient Approach of Segmentation and Blind Deconvolution in Image Restor... (iosrjce)
This paper introduces the concept of blind deconvolution for the restoration of a digital image and of small segments of a single image that has been degraded by noise. Image restoration is used in many areas, such as robotics for decision-making and biomedical research for the analysis of tissues, cells, and cellular constituents. Segmentation divides an image into multiple meaningful regions; it allows restoration of only a selected portion of the image, reducing system complexity by focusing only on the parts that need to be restored. Many techniques exist for restoring a degraded image, such as the Wiener filter, the regularized filter, and the Lucy-Richardson algorithm; all of them require prior knowledge of the blur kernel. In blind deconvolution, the blur kernel is initially unknown. This paper uses a Gaussian low-pass filter to convolve the image; the Gaussian low-pass filter minimizes the ringing effect, an artifact that occurs when the transition between one point and another is not clearly defined. After these ringing effects are removed, the restored image is clearly visible. The aim of this paper is to provide a better algorithm for removing unwanted features from an image, with quality measured in terms of PSNR (peak signal-to-noise ratio) and MSE (mean squared error). The proposed technique also works well with motion blur.
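The segment-wise restoration idea described above can be sketched as follows; the `degrade` and `restore_segment` helpers and the mask are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, sigma):
    """Simulate the blur stage: convolve the image with a Gaussian low-pass kernel."""
    return gaussian_filter(np.asarray(image, float), sigma=sigma)

def mse(reference, estimate):
    """Mean squared error, one of the paper's quality measures."""
    return float(np.mean((np.asarray(reference, float)
                          - np.asarray(estimate, float)) ** 2))

def restore_segment(image, mask, restore_fn):
    """Apply a restoration routine only where `mask` is True, leaving the
    rest of the image untouched (the segment-wise restoration idea)."""
    out = np.asarray(image, float).copy()
    out[mask] = restore_fn(image)[mask]
    return out
```

Restricting `restore_fn` to the masked region is what reduces the overall complexity.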
IMAGE DENOISING BY MEDIAN FILTER IN WAVELET DOMAIN (ijma)
The details of a noisy image may be restored by removing the noise with a suitable image de-noising method. In this research, a new method of image de-noising based on applying a median filter (MF) in the wavelet domain is proposed and tested. Various wavelet transform filters are used in conjunction with the median filter in order to obtain better de-noising results and, consequently, to select the best-suited filter. The wavelet transform, which operates on the frequency sub-bands split from an image, is a powerful tool for image analysis. In this experimental work, the proposed method gives better results than using either the wavelet transform or the median filter alone. MSE and PSNR values are used to measure the improvement in the de-noised images.
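A minimal sketch of the idea, assuming a single-level Haar transform in place of the paper's various wavelet filters (function names are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def haar2(x):
    """One level of a 2-D Haar transform (even-sized input assumed)."""
    x = np.asarray(x, float)
    lo = (x[:, ::2] + x[:, 1::2]) / 2.0   # row-wise averages
    hi = (x[:, ::2] - x[:, 1::2]) / 2.0   # row-wise differences
    ll = (lo[::2] + lo[1::2]) / 2.0       # approximation
    lh = (lo[::2] - lo[1::2]) / 2.0       # detail sub-bands
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, ::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def denoise(image, size=3):
    """Median-filter the detail sub-bands in the wavelet domain, then invert."""
    ll, lh, hl, hh = haar2(image)
    lh, hl, hh = (median_filter(b, size=size) for b in (lh, hl, hh))
    return ihaar2(ll, lh, hl, hh)
```

Filtering only the detail sub-bands targets the noise while leaving the coarse approximation intact.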
Use of Discrete Sine Transform for A Novel Image Denoising Technique (CSCJournals)
In this paper, we propose a new multiresolution image denoising technique using the Discrete Sine Transform. Wavelet techniques have long been used for multiresolution image processing, and the Discrete Cosine Transform is widely used for image compression. Like the Discrete Wavelet and Discrete Cosine Transforms, the Discrete Sine Transform is now found to possess good qualities for image processing, specifically for image denoising. An algorithm for image denoising using the Discrete Sine Transform is proposed, with simulation work for experimental verification. The method is computationally efficient and simple in both theory and application.
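A hedged sketch of transform-domain denoising with the Discrete Sine Transform, using SciPy's `dstn`/`idstn` and a simple hard threshold (the paper's exact rule may differ):

```python
import numpy as np
from scipy.fft import dstn, idstn

def dst_denoise(image, threshold):
    """Zero out small DST coefficients (hard threshold), then invert.
    With threshold 0 this is an exact round trip."""
    coeffs = dstn(np.asarray(image, float), type=2, norm="ortho")
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idstn(coeffs, type=2, norm="ortho")
```

The assumption is that image structure concentrates in a few large coefficients while noise spreads across many small ones.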
Despeckling of Ultrasound Imaging using Median Regularized Coupled PDE (IDES Editor)
This paper presents an approach for reducing speckle in ultrasound images using a Coupled Partial Differential Equation (CPDE) obtained by uniting second-order and fourth-order partial differential equations. Using PDEs to reduce speckle is a noise-smoothing approach that is attracting wide attention, because PDEs preserve edges well while reducing noise. We also introduce a median regulator that guides the energy source to boost the features in the image and regularize the diffusion. The proposed method is tested on both simulated and real medical ultrasound images and compared with SRAD, Perona-Malik diffusion, and nonlinear coherent diffusion; our method gives better results in terms of CNR, SSIM, and FOM.
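For context, the second-order Perona-Malik diffusion that the paper uses as a baseline can be sketched as follows (a minimal 4-neighbour version with a periodic border via `np.roll`, not the authors' CPDE):

```python
import numpy as np

def perona_malik(image, iterations=10, kappa=0.1, step=0.2):
    """Anisotropic diffusion: smooth within regions, stop at edges where
    the local gradient is large relative to kappa."""
    u = np.asarray(image, float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping function
    for _ in range(iterations):
        # finite differences toward the four neighbours (periodic border)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Small differences (flat, noisy regions) diffuse freely; large differences (edges) are attenuated by `g`, which is why PDE methods "keep the edge well".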
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is an open-access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
LabVIEW with DWT for denoising the blurred biometric images (ijcsa)
In this paper, de-noising of a blurred biometric image (a fingerprint) is presented and investigated using LabVIEW applications; the image is blurred and corrupted with Gaussian noise. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into two parts, which increases the processing speed for large biometric images. The work includes two tasks: the first designs the LabVIEW system to calculate and present the approximation coefficients, by which the image's blur factor is reduced to a minimum value according to the proposed algorithm; the second removes the image's noise by calculating the regression coefficients according to the Bayesian-shrinkage estimation method.
An Efficient Thresholding Neural Network Technique for High Noise Densities E... (CSCJournals)
Medical images infected with high noise densities lose their usefulness for diagnosis and early detection. Thresholding neural networks (TNNs) with a new class of smooth nonlinear functions have been widely used to improve the efficiency of the denoising procedure. This paper introduces a better solution for medical images in noisy environments that serves early detection of breast cancer tumors. The proposed algorithm consists of two consecutive phases: image denoising, where an adaptive-learning TNN with a remarkable time improvement and good image quality is introduced, and a semi-automatic segmentation that extracts suspicious regions of interest (ROIs) as an evaluation of the proposed technique. A data set is then applied to show the algorithm's superior image quality and complexity reduction, especially in highly noisy environments.
Removal of Gaussian noise on the image edges using the Prewitt operator and t... (IOSR Journals)
Abstract: An image edge-detection algorithm is applied to remove Gaussian noise introduced during capture or transmission, using a method that combines the Prewitt operator with a threshold-function technique for edge detection. This method outperforms one that combines the Prewitt operator with mean filtering. In this paper, mean filtering is first used to remove the initial Gaussian noise, the Prewitt operator is then used to detect edges in the image, and finally a threshold-function technique is applied together with the Prewitt operator.
Keywords: Gaussian noise, Prewitt operator, edge detection, threshold function
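The Prewitt-plus-threshold stage described above can be sketched roughly as follows (the kernels are the standard Prewitt masks; the threshold handling is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Prewitt masks for horizontal and vertical gradients.
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)
PREWITT_Y = PREWITT_X.T

def prewitt_edges(image, threshold):
    """Gradient magnitude from the Prewitt operator, binarised by a threshold."""
    img = np.asarray(image, float)
    gx = convolve(img, PREWITT_X)
    gy = convolve(img, PREWITT_Y)
    return np.hypot(gx, gy) > threshold
```

A mean filter would typically be applied to `image` before this step, as the abstract describes.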
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
A comparison of image segmentation techniques, Otsu and watershed, for X-ray i... (eSAT Journals)
Abstract: Tuberculosis is among the most dangerous and rapidly spreading diseases in the world. In investigating suspected tuberculosis (TB), chest radiography is the key diagnostic technique based on medical imaging, so computer-aided diagnosis (CAD) has become popular, many researchers are interested in this area, and different approaches have been proposed for TB detection. Image segmentation is of great importance in most medical imaging because it extracts anatomical structures from images. Many image segmentation techniques exist in the literature, each with its own advantages and disadvantages. The aim of X-ray segmentation is to subdivide the image into different portions so that it can help in studying the structure of the bone and detecting disorders. The goal of this paper is to review the most important image segmentation methods, starting from a database of real X-ray images. Keywords: chest radiography, computer-aided diagnosis, image segmentation, anatomical structures, real X-rays.
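Of the compared methods, Otsu's threshold has a compact closed form: pick the grey level that maximises the between-class variance of the resulting foreground/background split. A minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the histogram split maximising between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # probability of class 0 at each split
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]                  # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)    # between-class variance per candidate split
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

Watershed segmentation, by contrast, treats the gradient image as a topographic surface and is usually delegated to a library implementation.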
A NOVEL ALGORITHM FOR IMAGE DENOISING USING DT-CWT (sipij)
This paper addresses an image enhancement system consisting of an image denoising technique based on the Dual-Tree Complex Wavelet Transform (DT-CWT). The proposed algorithm first models the noisy remote sensing image (NRSI) statistically by aptly amalgamating its structural features and textures. This statistical model is decomposed using the DT-CWT with tap-10 (length-10) filter banks based on the Farras wavelet implementation, and the sub-band coefficients are modeled for denoising with a method that combines clustering techniques with soft thresholding (soft-clustering). The clustering techniques classify the noisy and image pixels based on neighborhood connected-component analysis (CCA), connected-pixel analysis, and inter-pixel intensity variance (IPIV), and calculate an appropriate threshold value for noise removal. This threshold is used with soft thresholding to denoise the image. Experimental results show that the proposed technique outperforms conventional and state-of-the-art techniques, and that images denoised with the DT-CWT strike a better balance between smoothness and accuracy than those denoised with the DWT. We used PSNR (peak signal-to-noise ratio) along with RMSE to assess the quality of the denoised images.
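The soft-thresholding step at the core of the method above is small enough to show directly (a generic sketch, not the authors' soft-clustering variant):

```python
import numpy as np

def soft_threshold(coefficients, t):
    """Soft thresholding: shrink every coefficient toward zero by t,
    zeroing those whose magnitude is below t."""
    c = np.asarray(coefficients, float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

Unlike hard thresholding, the surviving coefficients are also shrunk, which tends to produce smoother reconstructions.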
Review of Image Segmentation Techniques based on Region Merging Approach (Editor IJMTER)
Image segmentation is an important task in computer vision and object recognition. Since fully automatic segmentation is usually very hard for natural images, interactive schemes with a few simple user inputs are good solutions. In image segmentation the image is divided into various segments for processing. The complexity of image content is a major challenge for automatic segmentation. In region-based schemes, regions are merged based on a similarity criterion that compares the mean values of the two regions to be merged; similar regions are merged together, while dissimilar regions are kept separate.
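The mean-comparison merging criterion can be sketched in one dimension as follows (region adjacency along a line and the tolerance parameter are illustrative assumptions):

```python
def merge_regions(means, sizes, tolerance):
    """Greedily merge adjacent regions whose mean intensities differ by at
    most `tolerance`, updating the merged region's mean and size."""
    means, sizes = list(means), list(sizes)
    i = 0
    while i < len(means) - 1:
        if abs(means[i] - means[i + 1]) <= tolerance:
            total = sizes[i] + sizes[i + 1]
            # size-weighted mean of the merged region
            means[i] = (means[i] * sizes[i] + means[i + 1] * sizes[i + 1]) / total
            sizes[i] = total
            del means[i + 1], sizes[i + 1]
        else:
            i += 1
    return means, sizes
```

A real implementation works on a 2-D region adjacency graph, but the similarity test is the same comparison of region means.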
This slide introduces a blurred-image recognition system using Legendre's moment-invariant algorithm and explains how a blurred image is recognized and converted back to the original image.
An improvised tree algorithm for association rule mining using transaction re... (Editor IJCATR)
Association rule mining plays an important role in data mining research, where the aim is to find interesting correlations between sets of items in databases. The Apriori algorithm has been the most popular technique for finding frequent patterns; however, it must scan the database many times to compute the counts of the huge number of candidate itemsets. A new algorithm is proposed as a solution to this problem; it concentrates mainly on reducing candidate-set generation and also aims to improve the execution time of the process.
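For reference, the level-wise Apriori counting that the proposed algorithm improves upon can be sketched as follows (a textbook version, not the paper's tree algorithm):

```python
def frequent_itemsets(transactions, min_support):
    """Level-wise Apriori: count candidate itemsets of growing size,
    keeping those whose support meets min_support."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k_sets = [frozenset([i]) for i in items]
    while k_sets:
        # one full pass over the database per level -- the cost Apriori pays
        counts = {c: sum(c <= t for t in transactions) for c in k_sets}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # next-level candidates: unions of surviving sets, one item larger
        size = len(next(iter(k_sets))) + 1
        k_sets = list({a | b for a in survivors for b in survivors
                       if len(a | b) == size})
    return frequent
```

Each level requires a full database scan, which is exactly the overhead that tree-based and transaction-reduction variants target.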
Automatic Seed Classification by Shape and Color Features using Machine Visio... (Editor IJCATR)
In this paper, the proposed system uses content-based image retrieval (CBIR) for identifying seeds (e.g., wheat, rice, gram) on the basis of their features. CBIR identifies or recognizes an image based on the features present in it; features fall into four basic categories: color, shape, texture, and size. In this system we extract color and shape features, then classify images into categories using a neural network according to the weights, displaying the image from the category for which the network shows the maximum weight: category 1 is wheat and category 2 is gram. An experiment was conducted on 200 images of wheat and gram using Euclidean distance (ED) and artificial-neural-network techniques; 150 of the 200 images were used for training and 50 for testing. The precision of the system is 84.4 percent using ED and 95 percent using the artificial neural network.
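The Euclidean-distance classification mentioned above reduces to nearest-class-mean assignment; a hedged sketch with made-up feature vectors and class means:

```python
import numpy as np

def classify(feature, class_means):
    """Assign a feature vector to the class whose mean feature vector is
    nearest in Euclidean distance."""
    names = list(class_means)
    dists = [np.linalg.norm(np.asarray(feature, float)
                            - np.asarray(class_means[n], float))
             for n in names]
    return names[int(np.argmin(dists))]
```

The neural-network variant replaces this distance rule with learned weights, which is what lifts precision from 84.4 to 95 percent in the paper's experiment.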
Enhancing Web-Security with Stronger Captchas (Editor IJCATR)
CAPTCHAs are used widely across the World Wide Web to prevent automated programs from scraping data from websites. A CAPTCHA is a challenge-response test used to ensure that the response is generated by a person, not by a computer: users are asked to read and type a string of distorted characters to prove that they are human. Automation is a real problem for web applications; automated attacks can exploit many services:
1. Blogs 2. Forums 3. Phishing 4. Theft of data
Registration websites use CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) systems to prevent bot programs from wasting their resources. Technologies now change very rapidly, and spammers and hackers keep trying new ways to crack CAPTCHAs; it is therefore necessary to develop more advanced techniques for generating them than simply rendering CAPTCHA images from text or rotating an object within the images.
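The challenge-generation core of a text CAPTCHA (without the distortion and rendering the paper discusses) can be sketched as:

```python
import random
import string

def captcha_text(length=6, seed=None):
    """Generate the random challenge string at the heart of a text CAPTCHA;
    distortion and image rendering are deliberately omitted here."""
    rng = random.Random(seed)
    alphabet = string.ascii_uppercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))
```

The security of a text CAPTCHA rests almost entirely on the rendering and distortion applied to this string, not on the string itself.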
The Mathematics of Social Network Analysis: Metrics for Academic Social Networks (Editor IJCATR)
Social network analysis plays an important role in analyzing social relations and patterns of interaction among actors in a social network. Such networks can be casual, like those on social media sites, or formal, like academic social networks. Each network is characterized by underlying data that defines its various features. Given the size and diversity of these networks, it may not be possible to dissect an entire network by conventional means. Social network visualization can graphically represent these networks in a concise and easy-to-understand manner; visualization tools rely heavily on quantitative features that numerically define various attributes of the network. These features, also referred to as social network metrics, have everyday mathematics as their foundation. In this paper we provide an overview of the social network analysis metrics commonly used to analyze social networks, explaining the metrics and outlining their relevance for academic social networks.
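As an example of the "everyday mathematics" behind such metrics, degree centrality is simply a node's degree normalised by the maximum possible degree, n - 1; a minimal sketch over an undirected edge list:

```python
def degree_centrality(edges, nodes):
    """Degree centrality for an undirected graph: each node's degree
    divided by the maximum possible degree (n - 1)."""
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in degree.items()}
```

In an academic network, a high degree centrality would mark, for instance, a prolific co-author.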
The most widespread user-authentication approach in use today is evidently password-based authentication. When we carry out a credit card transaction through an EDC (Electronic Data Capture) machine in public, the user's PIN becomes very vulnerable to direct observation by nearby adversaries in crowded places, aided by vision-enhancing and/or recording appliances. Devising a secure PIN-entry method for credit card transactions in such situations is a strenuous task, and currently no pragmatic solution has been implemented for this problem. This paper starts by investigating the current status of direct observational attacks; our analysis concludes that no practical solution is presently available against them. We introduce a model that attempts to make PIN entry secure during credit card transactions in public places by using the user's mobile phone for PIN entry rather than the merchant's machine. The best trait of the proposed model is that the PIN is not revealed to any direct observational attack, be it direct human observation or observation by a video camera.
Effect of Heat Treatment on Corrosion Behavior of Spring Steels (Editor IJCATR)
This experimental work deals with the effect of heat treatment on the corrosion behaviour of spring steels. In this study, heat treatments such as hardening, normalizing, and tempering were applied to spring steels to obtain a martensitic matrix, a pearlitic structure, and a tempered martensitic matrix, respectively. After heat treatment, microstructural studies of the samples were carried out using SEM and hardness measurements were taken. The corrosion behaviour of all heat-treated samples in HCl at different concentrations (1.5 N, 2 N, and 2.5 N) was determined using the Tafel extrapolation technique, and the variation in corrosion rate due to heat treatment was noted. The results indicate that the corrosion rate is minimum for a fully martensitic matrix and maximum for a pearlitic structure, and that the corrosion rate increases correspondingly as tempering time increases. Micrographs of the corroded structures were also taken using SEM and analysed.
Design and Implementing Novel Independent Real-Time Software Programmable DAQ... (Editor IJCATR)
Data acquisition and telemetry are crucial features of many demanding applications in industry and aerospace. In launch vehicle systems it is vital to observe and analyse real-time performance so that designs can be certified and tunable factors can be adjusted to improve performance. Currently used DAQ structures are large, heavy, expensive, and power-hungry. This article introduces a new mission-independent, real-time, software-programmable DAQ system using a multipurpose MCU and sigma-delta ADCs, designed with size, weight, cost, and performance in mind, without compromising precision, stability, and drift behaviour. Additional digital filtering stages are added to improve system performance. The system is capable of direct connection to diverse pressure and temperature sensors, interfacing 32 low-frequency channels and two high-frequency channels, and operates in two modes: a data acquisition mode and a program mode. Effective power-reduction methods and a wireless interface protocol between the various data acquisition modules are also touched upon as avenues for future work.
AN INVERTED LIST BASED APPROACH TO GENERATE OPTIMISED PATH IN DSR IN MANETS –... - Editor IJCATR
In this paper, we design and formulate an inverted-list-based approach for providing a safer path and effective communication in the DSR protocol. Some nodes in the network participate far more frequently than others, so an approach is required that makes an intelligent decision about sharing bandwidth or resources with a node or node group. Dynamic Source Routing (DSR) is an on-demand, source routing protocol in which all routing information is maintained (continually updated) at the mobile nodes.
Cognitive Radio: An Emerging trend for better Spectrum Utilization - Editor IJCATR
Due to the rapid development of wireless communications in recent years, the demand for wireless spectrum has been growing dramatically, resulting in the spectrum scarcity problem. Studies have shown that the fixed spectrum allocation policy commonly adopted today suffers from low spectrum utilization. Both academic and regulatory bodies have focused on dynamic spectrum access to fully utilize the scarce spectrum resource. Cognitive radio, with its capability to flexibly adapt its parameters, has been proposed as the enabling technology for unlicensed secondary users to dynamically access the licensed spectrum owned by legacy primary users on a negotiated or opportunistic basis. In this paper we present a broad survey of the various methods used in cognitive radio to adapt to such changes.
A Review on Feature Extraction Techniques and General Approach for Face Recog... - Editor IJCATR
In recent times, along with advances and new inventions in science and technology, fraudsters and identity thieves are also becoming smarter, finding new ways to fool authorization and authentication processes. There is therefore a strong need for an efficient face recognition process, i.e. computer systems capable of recognizing the faces of authenticated persons. One way to make face recognition efficient is to extract features of faces. Several feature extraction techniques are available, such as template-based, appearance-based, geometry-based and color-segmentation-based techniques. This paper presents an overview of the feature extraction techniques followed in different research works on face recognition in the field of digital image processing, and gives an approach for using these feature extraction techniques for efficient face recognition.
Co-Extracting Opinions from Online Reviews - Editor IJCATR
Extraction of opinion targets and opinion words from online reviews is an important and challenging task in opinion mining. Opinion mining is the use of natural language processing, text analysis and computational techniques to identify and recover subjective information in source materials. This paper proposes a supervised word alignment model that identifies opinion relations. The paper also focuses on topical relations, extracting relevant information or features only from a particular set of online reviews. A feature extraction algorithm is used to identify potential features, and finally the items are ranked based on the frequency of positive and negative reviews. Compared with previous methods, our model captures opinion relations and extracts features more precisely. One of its main advantages is that it obtains better precision because of the supervised alignment model. In addition, an opinion relation graph is used to represent the relationship between opinion targets and opinion words.
Visual Image Quality Assessment Technique using FSIM - Editor IJCATR
The goal of quality assessment (QA) research is to design algorithms that can automatically
assess the quality of images in a perceptually consistent manner. Image QA algorithms generally
interpret image quality as fidelity or similarity with a “reference” or “perfect” image in some perceptual
space. In order to improve the assessment accuracy of white noise, Gauss blur, JPEG2000 compression
and other distorted images, this paper puts forward an image quality assessment method based on phase
congruency and gradient magnitude. The experimental results show that the proposed assessment method has higher accuracy than traditional methods and can accurately reflect the visual perception of the human eye. We also propose an image information measure that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image.
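The gradient-magnitude term of such an assessment can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the forward-difference gradient, the constant `c`, and plain 2-D lists of floats are all assumptions made for the sketch.

```python
# Hedged sketch of a gradient-magnitude similarity term for image QA.
# Images are 2-D lists of floats; c is a small stabilizing constant.

def gradient_magnitude(img):
    """Approximate per-pixel gradient magnitude with forward differences."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][min(x + 1, w - 1)] - img[y][x]
            gy = img[min(y + 1, h - 1)][x] - img[y][x]
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def gradient_similarity(ref, dist, c=0.01):
    """Mean of the pointwise map S = (2*g1*g2 + c) / (g1^2 + g2^2 + c)."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    h, w = len(ref), len(ref[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            a, b = g1[y][x], g2[y][x]
            total += (2 * a * b + c) / (a * a + b * b + c)
    return total / (h * w)
```

Identical images score exactly 1, and distortions that change local gradients pull the score below 1, which is what makes the map usable as a fidelity measure.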
A Proposed Simulation Model of Automatic Machine For House Paint Selection Us... - Editor IJCATR
One important criterion considered by the public when selecting a house or property is the house paint color. In principle, the selection of house paint is not excessively complicated, but if homeowners do not have enough knowledge about how to combine paint colors well, the chosen colors may not match their expectations. This study provides a simulation of how paint color combinations can be selected using an automated machine. The model was designed using the Prototype method, with black-box testing to ensure that the system is valid and reliable. The automata model used is a Finite State Automaton (FSA), which works by identifying and capturing patterns in the process of determining the wall paint color of a house. This paper also presents the identified input symbols, the NFA diagram, the production rules, the NFA, the NFA-to-DFA equivalence transition table, and the DFA transition diagram. In the prototype design, a mockup of the simulation application was also produced for the automatic house paint selection machine. Thus, problems such as confusion and homeowners' lack of knowledge about paint color combinations can be properly overcome.
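The FSA idea above can be sketched as a small table-driven automaton. The states, input symbols and the toy "base colour before accent" rule below are invented for illustration; the paper derives its own symbols and NFA-to-DFA tables.

```python
# Hedged sketch: a deterministic finite automaton (DFA) as a transition
# table. All names here are illustrative, not taken from the paper.

def make_dfa(transitions, start, accepting):
    """Return a function that tells whether the DFA accepts a symbol string."""
    def accepts(symbols):
        state = start
        for s in symbols:
            state = transitions.get((state, s))
            if state is None:          # no transition defined: reject
                return False
        return state in accepting
    return accepts

# Toy rule: a base colour ('b') must be chosen before an accent ('a').
dfa = make_dfa(
    transitions={("q0", "b"): "q1", ("q1", "b"): "q1", ("q1", "a"): "q2"},
    start="q0",
    accepting={"q2"},
)
```

For example, `dfa("ba")` accepts while `dfa("ab")` rejects, mirroring how a compiled FSA validates a sequence of selections.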
A Review of Machine Learning based Anomaly Detection Techniques - Editor IJCATR
Intrusion detection has been popular for the last two decades; an intrusion is an attempt to break into or misuse a system. Detection is mainly of two types: misuse (signature-based) detection and anomaly detection. This paper discusses machine-learning-based methods, which form one family of anomaly detection techniques.
Solving Multi-level, Multi-product and Multi-period Lot Sizing and Scheduling... - Editor IJCATR
In this paper, a new model of capacitated lot sizing and scheduling in a permutation flow shop is developed. In this model, demand can be totally backlogged, and setups can be carried over and are sequence-dependent. It is well known from the literature that capacitated lot sizing problems in permutation flow shop systems are NP-hard. This means the model cannot, in general, be solved in polynomial time, whereas metaheuristic algorithms are capable of solving these problems within a reasonable computing load. Metaheuristic algorithms have found many applications in recent research. Accordingly, this paper proposes two evolutionary algorithms: one of the most popular, the Genetic Algorithm (GA), and one of the most powerful population-based algorithms, the Imperialist Competitive Algorithm (ICA). The proposed algorithms are calibrated by the Taguchi method and compared against a presented lower bound. Some numerical examples are solved by both algorithms and the lower bound. The quality of the obtained solutions shows the superiority of ICA over GA.
Cooperative Demonstrable Data Retention for Integrity Verification in Multi-C... - Editor IJCATR
Demonstrable data retention (DDR) is a technique that ensures the integrity of data in storage outsourcing. In this paper we propose an efficient DDR protocol that prevents an attacker from gaining information from multiple cloud storage nodes. Our technique is designed for distributed cloud storage and supports the scalability of services and data migration; it cooperatively stores and maintains the client's data on multi-cloud storage. To ensure the security of our technique we use a zero-knowledge proof system, which satisfies the zero-knowledge, knowledge soundness and completeness properties. We present a Cooperative DDR (CDDR) protocol based on a hash index hierarchy and homomorphic verification responses. To optimize the performance of our technique, we use a novel method for selecting optimal parameter values to reduce the storage overhead and the client's computation costs for service providers.
Isolation and Screening of Hydrogen Producing Bacterial Strain from Sugarcane... - Editor IJCATR
The aim of this study is to isolate a highly competent bacterium with potent cellulose-degrading capability that is also a good hydrogen producer. A soil sample from a sugarcane bagasse yard was collected, serially diluted and plated on a cellulose-specific nutrient agar plate. Four colonies were isolated, of which a single colony had potent cellulose-degrading ability and the highest hydrogen productivity of 275.13 mL H2 L-1. The newly isolated bacterium was characterized morphologically and biochemically. Molecular characterization of the bacterium was carried out using 16S rDNA sequencing, and the organism was identified as Bacillus subtilis AuChE413. Proteomic analysis (MALDI-TOF) was carried out to differentiate the isolated Bacillus subtilis from Bacillus thuringiensis and Bacillus amyloliquefaciens. A phylogenetic tree was constructed to analyze the evolutionary relationship of the newly isolated strain with different genera and species.
Performance Comparison of Digital Image Watermarking Techniques: A Survey - Editor IJCATR
Digital watermarking is the process of embedding information into a digital signal. A watermark is a secondary image which is overlaid on the host image and provides a means of protecting the image. To provide a high-quality watermarked image, the watermark should be imperceptible. This paper presents different techniques of digital image watermarking based on the spatial and frequency domains, and shows that spatial domain techniques provide security, successful recovery of the watermark image, and a higher PSNR value compared with frequency domain techniques.
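A typical spatial-domain technique of the kind such surveys compare is least-significant-bit (LSB) embedding. The sketch below is a generic illustration (flat lists of 8-bit pixel values, no key or robustness measures), not a method taken from the paper.

```python
# Hedged sketch: LSB watermark embedding and extraction in the spatial
# domain. Pixels are 8-bit integers in a flat list; watermark bits are 0/1.

def embed_lsb(host, bits):
    """Hide watermark bits in the LSB of the first len(bits) host pixels."""
    out = list(host)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b    # clear the LSB, then set it to the bit
    return out

def extract_lsb(marked, n):
    """Recover n watermark bits from the marked image."""
    return [p & 1 for p in marked[:n]]
```

Each pixel changes by at most 1 grey level, which is why LSB schemes stay imperceptible and score a high PSNR.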
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN... - cscpconf
The fusion of two or more images is required when images are captured using different sensors, different modalities or different camera settings, to produce an image which is more suitable for computer processing and human visual perception. The optical lenses in cameras have limited depth of focus, so it is not possible to acquire an image in which all objects are in focus. In this case we need a multifocus image fusion technique to create a single image in which all objects are in focus, by combining the relevant information in the two or more images. As sharp images contain more information than blurred ones, image sharpness is taken as one piece of relevant information when framing the fusion rule. Many existing algorithms use contrast or high local energy as a measure of local sharpness (relevant information); in practice, particularly in multimodal image fusion, this assumption does not hold. In this paper we propose a method which combines a multiresolution transform with a local phase coherence measure to measure the sharpness of the images. The performance of the fusion process was evaluated with mutual information, edge association and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform) and bilateral-gradient-based sharpness criterion methods. The results show that the proposed algorithm performs better than the existing ones.
A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resol... - CSCJournals
High-resolution images can be reconstructed from several blurred, noisy and aliased low-resolution images using a computational process known as super resolution reconstruction: the process of combining several low-resolution images into a single higher-resolution image. In this paper we concentrate on a special case of the super resolution problem where the warp is composed of pure translation and rotation, the blur is space-invariant and the noise is additive white Gaussian noise. Super resolution reconstruction consists of registration, restoration and interpolation phases. Once the low-resolution images are registered with respect to a reference frame, wavelet-based restoration is performed to remove blur and noise from the images; finally the images are interpolated using adaptive interpolation. We propose an efficient wavelet-based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the low-resolution images are decomposed into many levels to obtain different frequency bands. Our proposed novel soft thresholding technique is then used to remove noisy coefficients by fixing an optimum threshold value. To obtain an image of higher resolution we propose an adaptive interpolation technique. Our proposed wavelet-based denoising with adaptive interpolation preserves edges and smooths the image without introducing artifacts. Experimental results show that the proposed approach succeeds in obtaining a high-resolution image with high PSNR and ISNR and good visual quality.
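The soft thresholding step at the heart of such wavelet denoising can be sketched directly. The coefficient values and threshold below are illustrative; the paper fixes its optimum threshold adaptively.

```python
# Hedged sketch: soft thresholding of wavelet detail coefficients.
# Coefficients with magnitude below t are zeroed; the rest shrink toward 0.

def soft_threshold(coeffs, t):
    """Apply the soft-threshold (shrinkage) rule to a list of coefficients."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out
```

Unlike hard thresholding, the surviving coefficients are shrunk rather than kept as-is, which avoids abrupt discontinuities and helps suppress artifacts.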
Visual Quality for both Images and Display of Systems by Visual Enhancement u... - IJMER
Advance in Image and Audio Restoration and their Assessments: A Review - IJCSES Journal
Image restoration is the process of restoring the original image from a degraded one. Images can be affected by various types of noise, such as Gaussian noise and impulse noise, and by blurring introduced during image recording, such as motion blur and out-of-focus blur. Image restoration techniques are used to reverse the effects of noise and blurring. Restoration of distorted images can be done with some information about the noise and the nature of the blurring, or without any knowledge of the image degradation process. Researchers have proposed many algorithms in this regard; in this paper, different noise and degradation models and restoration methods are discussed, and some research in this field is reviewed.
Wavelet Transform based Medical Image Fusion With different fusion methods - IJERA Editor
This paper proposes a wavelet-transform-based image fusion algorithm, after studying the principles and characteristics of the discrete wavelet transform. Medical image fusion is used to derive useful information from multimodality medical images. The idea is to improve the image content by fusing images such as computed tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and to the clinical treatment planning system. This paper uses the wavelet transform to fuse medical images: the wavelet-based fusion algorithms are applied to CT and MRI images, with fusion using the MIN, MAX and MEAN methods, and the results are presented. With more multimodality medical images available in clinical applications, the idea of combining images from different modalities has become very important, and medical image fusion has emerged as a promising new research field.
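The MIN, MAX and MEAN fusion rules mentioned above reduce to a pixel-wise (or coefficient-wise) operation on two co-registered arrays. A minimal sketch, assuming equal-sized 2-D lists; in a wavelet fusion pipeline these rules are applied per sub-band rather than to raw pixels.

```python
# Hedged sketch: pixel-wise MIN / MAX / MEAN fusion of two equal-sized
# 2-D arrays (e.g. a CT slice and a co-registered MRI slice).

def fuse(a, b, rule):
    """Fuse two 2-D arrays element by element using the named rule."""
    ops = {
        "MIN": min,
        "MAX": max,
        "MEAN": lambda x, y: (x + y) / 2.0,
    }
    op = ops[rule]
    return [[op(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

MAX tends to keep the brighter structure from either modality, MIN the darker, and MEAN a compromise; that is the trade-off the comparison in the paper explores.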
Text Mining in Digital Libraries using OKAPI BM25 Model - Editor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to the retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on information retrieval processes. The performance of the Boolean, vector space and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
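Okapi BM25 scoring can be sketched compactly. The k1 = 1.5 and b = 0.75 defaults and the particular IDF variant below are common choices assumed for this sketch, not configuration taken from the paper.

```python
import math

# Hedged sketch of Okapi BM25: score each tokenized document against a
# tokenized query, with term-frequency saturation (k1) and length
# normalization (b).

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Return one BM25 score per document for the given query terms."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    scores = []
    for d in docs:
        score = 0.0
        for term in query:
            tf = d.count(term)
            df = sum(1 for doc in docs if term in doc)
            # A common smoothed IDF that stays non-negative.
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

Documents containing none of the query terms score zero, and scores grow with term frequency but saturate, which is what drives the relevance ranking behaviour described above.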
Green Computing, eco trends, climate change, e-waste and eco-friendly - Editor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in Nigeria - Editor IJCATR
Computers today are an integral part of individuals' lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance they assign to various attributes differs, and a more environment-friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria, highlight a series of measures and the advantages they herald for our country, and propose a series of action steps to develop these areas further. It is possible for Nigeria to achieve an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal, as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce... - Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research which enables interconnection among moving vehicles and between mobile units (vehicles) and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, in terms of size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks with more complex mobility patterns; because of mobility, the topology changes very frequently. This raises a number of technical challenges, including the stability of the network, so cluster configurations are needed that lead to a more stable, realistic network. The paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their numbers are varied to find the more stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation Conditions - Editor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi... - Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complications. Several laboratory tests must be done to detect DM, and the results of these tests are then converted into training data. The training data used in this study were generated from the UCI Pima database, with 6 attributes used to classify diabetes as positive or negative. There are various classification methods in common use, and in this study three of them were compared on one identical case: fuzzy KNN, the C4.5 algorithm, and the Naïve Bayes Classifier (NBC). The objective of this study was to create software to classify DM using the tested methods and to compare the three methods based on accuracy, precision, and recall. The results showed that the best method was fuzzy KNN, with average and maximum accuracy reaching 96% and 98%, respectively. In second place, the NBC method had average and maximum accuracy of 87.5% and 90%, respectively. Lastly, the C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
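The accuracy, precision and recall used to compare the three classifiers are simple functions of the confusion counts. A minimal sketch with invented label lists (1 = positive DM, 0 = negative):

```python
# Hedged sketch: classification metrics from predicted vs. true labels.
# tp/tn/fp/fn are the standard confusion-matrix counts.

def evaluate(y_true, y_pred):
    """Return accuracy, precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

Reporting all three matters for medical data: with imbalanced classes, a classifier can score high accuracy while missing most positive cases, which recall exposes.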
Web Scraping for Estimating new Record from Source Site - Editor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutualistic, symbiotic relationship. In today's information age, websites serve as a main data source. The research focus is on how to get data from websites and how to slow down the intensity of downloads. One problem is that the source websites are autonomous, so the structure of their content is vulnerable to change at any time; another is that the Snort intrusion detection system installed on the server can detect crawler bots. The researchers therefore propose the Mining Data Records (MDR) method together with exponential smoothing, so that the crawler adapts to changes in content structure and browses or fetches automatically, following the pattern of news occurrences. In the tests, with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, the recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimation using α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the duplicates down from the 21.8 data records of a fixed download/fetch schedule to 3.6, within the average time between news occurrences.
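Single exponential smoothing, with α = 0.5 as in the reported tests, folds each new observation into a running estimate of when the next records will appear. The interval series below is illustrative.

```python
# Hedged sketch: single exponential smoothing of a series of observed
# publication intervals (minutes between new records, for example).

def exp_smooth(series, alpha=0.5):
    """Return the smoothed estimate after folding in each observation."""
    estimate = series[0]
    for x in series[1:]:
        # New estimate = alpha * latest observation + (1 - alpha) * old estimate.
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate
```

The smoothed interval can then drive the fetch schedule: crawl roughly when the next record is expected, instead of on a fixed timer that repeatedly re-downloads duplicates.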
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S... - Editor IJCATR
Most existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using a single ontology. Ontology-based semantic similarity techniques, including structure-based techniques (the Path Length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information-content-based techniques (Resnik's measure, Lin's measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen's measure (SemDist)), were evaluated relative to human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology, and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with experts' ratings.
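Wu and Palmer's measure, one of the structure-based techniques evaluated, scores two concepts by the depth of their lowest common subsumer: wup(a, b) = 2·depth(LCS) / (depth(a) + depth(b)). A sketch over a toy is-a taxonomy, standing in for the ICD-10 hierarchy the paper actually uses; the concept names below are invented.

```python
# Hedged sketch: Wu-Palmer similarity over a child -> parent taxonomy.
# depth() counts nodes on the path to the root, so the root has depth 1.

def path_to_root(node, parent):
    """Return the list of nodes from `node` up to the taxonomy root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def wu_palmer(a, b, parent):
    """2 * depth(LCS) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a, parent), path_to_root(b, parent)
    ancestors_a = set(pa)
    lcs = next(n for n in pb if n in ancestors_a)   # lowest common subsumer
    depth = lambda n: len(path_to_root(n, parent))
    return 2.0 * depth(lcs) / (depth(a) + depth(b))

# Toy is-a taxonomy (child -> parent), invented for illustration.
parent = {"flu": "viral_disease", "measles": "viral_disease",
          "viral_disease": "disease", "diabetes": "disease"}
```

Siblings under a deep shared subsumer score higher than concepts that only meet at the root, which is the intuition behind all the path-based measures compared above.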
Semantic Similarity Measures between Terms in the Biomedical Domain within f... - Editor IJCATR
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery. Most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS), and many experiments have been conducted to check the applicability of these measures. In this paper, we investigate measuring the semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 "V1.0" as the primary source, and compare our results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift - Editor IJCATR
An effective way to improve the storage access performance of small files in OpenStack Swift is to add an aggregate storage module. Because Swift performs many disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
Integrated System for Vehicle Clearance and Registration - Editor IJCATR
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems for handling government receipts and payments. In 2016, Nigeria implemented a unified structure, as recommended by the IMF, in which all government funds are collected in one account; this would reduce borrowing costs, extend credit and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and licence renewal. Vehicle owner information, customs duty information, plate number registration details, etc. can also be efficiently retrieved from the system by any of the agencies without contacting another agency. The number plate will also no longer be the only means of vehicle identification, as is presently the case in Nigeria, because the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu... - Editor IJCATR
The Supermarket Management System deals with the automation of the buying and selling of goods and services, including both the sale and purchase of items. The Supermarket Management System project is to be developed with the objective of making the system reliable, easier, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A* - Editor IJCATR
Energy is a key component in a Wireless Sensor Network (WSN) [1]: the system cannot run as intended without adequate power units. One of the characteristics of wireless sensor networks is limited energy [2], and much research has been done to develop strategies to overcome this problem. One of them is the clustering technique; a popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH) [3]. In LEACH, clustering is used to determine Cluster Heads (CHs), which are then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique, which uses the Betweenness Centrality (BC) theory from the Social Network Analysis approach and is implemented in the setup phase. In the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is implemented. The experiment deployed 100 static nodes in a 100x100 area, with one Base Station at coordinates (50,50); to establish the reliability of the system, the experiment was run for 5000 rounds. The performance of the designed routing protocol strategy was tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA* is better than LEACH. This is influenced by the way LEACH determines the CH dynamically, changing it in every data transmission process; this increases energy use, because a computation to determine the CH is performed for every transmission. In contrast, in BC-MBDA* the CH is determined statically, which decreases energy usage.
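Betweenness centrality for an unweighted graph can be computed with Brandes' algorithm; a generic sketch on an adjacency-list graph follows. The graphs below are illustrative, and this is a standard implementation, not the paper's code, which applies BC to the sensor topology to pick cluster heads.

```python
from collections import deque

# Hedged sketch: Brandes' algorithm for unweighted betweenness centrality.
# adj maps each node to a list of neighbours. For undirected graphs each
# node pair is counted in both directions (no normalization applied).

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack, q = [], deque([s])
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}
        dist = {v: -1 for v in adj}
        sigma[s], dist[s] = 1, 0
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

Nodes that sit on many shortest paths (like the middle of a path or the hub of a star) get the highest scores, which is exactly the property that makes them natural cluster-head candidates.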
Security in Software Defined Networks (SDN): Challenges and Research Opportun... - Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers have thrown up new challenges for legacy networks and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Networking (SDN), which decouples the control plane from the data plane through network virtualization, aims to address these challenges. This paper explores the SDN architecture and its implementation with the OpenFlow protocol. It also assesses some of SDN's benefits over traditional network architectures, its security concerns and how they can be addressed in future research, and related work in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
Non-Blind Deblurring Using Partial Differential Equation Method

International Journal of Computer Applications Technology and Research
Volume 2, Issue 3, 232-236, 2013
www.ijcat.com
Devender Sharma, CSE Department, HCE, Sonepat, India
Puneet Sharma, CSE Department, HCE, Sonepat, India
Ritu Sharma, ECE Department, BMIET, Sonepat, India
Abstract: In this paper, a new idea for a two-dimensional image deblurring algorithm is introduced which uses the basic concepts of PDEs. When the PSF is known in advance, restoration is called non-blind deblurring; the degradation function can be estimated by observation, experimentation, or mathematical modeling. Here, PDE-based mathematical modeling is proposed to model both the degradation and the recovery process. Restoration methods such as Wiener filtering, inverse filtering [1], constrained least squares, and the Lucy-Richardson iteration remove motion blur either by using the Fourier transform in the frequency domain or by using optimization techniques. The main difficulty with these methods is estimating the deviation of the restored image from the original image at individual points, which is a consequence of processing in the frequency domain. Another method, the travelling-wave deblurring method, works in the spatial domain. A PDE-type observation model describes well several physical mechanisms, such as relative motion between the camera and the subject (motion blur), bad focusing (defocus blur), and a number of other mechanisms that are well modeled by a convolution. Finally, the PDE method is compared with existing restoration techniques such as Wiener filters and median filters [2], and the results are compared on the basis of the PSNR calculated for various noises.

Keywords: PDE, PSF, deblurring, Wiener filter
1. INTRODUCTION
Images are produced in order to record or display useful information. Due to imperfections in the electronic or photographic medium, the recorded image often represents a degraded version of the original scene. The degradation may have many causes, but two types are often dominant: blurring and noise. The restoration and enhancement of blurred and noisy images are of fundamental importance in image processing applications. To find the original image, the degraded image has to be deblurred. The field of image deblurring is concerned with the reconstruction or restoration of the uncorrupted image from a distorted and noisy one. The restoration (deblurring) of images is an old problem in image processing, but it continues to attract the attention of researchers and practitioners. A number of real-world problems, from astronomy to consumer imaging, find applications for image restoration algorithms. Image restoration is an easily visualized example of a larger class of inverse problems.

The degradation of an image can be caused by many factors: movement during the image capture process, by the camera or, when long exposure times are used, by the subject; out-of-focus optics; use of a wide-angle lens; atmospheric turbulence; or a short exposure time, which reduces the number of photons captured. Confocal microscopy is an optical imaging technique that enables the reconstruction of 3-D structures from the acquired images.

An ideal camera or recording device would record an image so that the intensity of a small piece (pixel) of the recorded image was directly proportional to the intensity of the corresponding section of the scene being recorded. Real cameras violate this model in two ways. First, the recorded intensity of a pixel is related to the intensity in a larger neighborhood of the corresponding section of the scene; in visual images this effect is called blurring. Second, the recorded intensities are contaminated by random noise. Noise is unwanted or undesirable information that contaminates an image, and it appears in images from a variety of sources. The digital image acquisition process, which converts an optical image into a continuous electrical signal that is then sampled, is the primary process by which noise appears in a digital image. Image noise is a random variation of brightness or color information in images produced by the camera; fluctuations caused by natural phenomena add a random value to a given pixel.

A blurred or degraded image can be approximately described by the equation
k = H * f + n,    (1)

where k is the blurred image, H is the distortion operator, also called the point spread function (PSF), f is the original true image, and n is the additive noise, introduced during image acquisition, that corrupts the image. The figure below shows the degradation of an image by the PSF.
Figure 1: Degradation of an image by the PSF
The degraded images are deblurred using the following traditional techniques.
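The degradation model of Eq. (1) can be simulated with a short sketch (an illustration of this write-up, not code from the paper): a uniform horizontal-motion PSF is applied by averaging shifted copies of the image, and Gaussian noise is added. The function name `motion_blur` and the circular-shift simplification are illustrative choices.

```python
import numpy as np

def motion_blur(f, length, noise_sigma=0.0, seed=0):
    """Simulate k = H*f + n: average `length` horizontally shifted
    copies of f (a uniform motion PSF), then add Gaussian noise n."""
    rng = np.random.default_rng(seed)
    # H*f as an average of shifted copies (circular shift for simplicity)
    k = np.mean([np.roll(f, s, axis=1) for s in range(length)], axis=0)
    return k + rng.normal(0.0, noise_sigma, f.shape)

# A sharp vertical line gets smeared across `length` columns.
f = np.zeros((8, 8)); f[:, 4] = 1.0
k = motion_blur(f, length=3)
```

With `noise_sigma=0`, the single bright column of intensity 1 is spread evenly over three columns of intensity 1/3 each, which is exactly the behavior a horizontal motion PSF should produce.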
1.1 Wiener Filter
This method is founded on considering the image and noise as random processes; the objective is to find an estimate of the uncorrupted image such that the mean square error between them is minimized. The simplest approach is to restore the original image by dividing the transform of the degraded image by the degradation function:

F'(u,v) = F(u,v) + N(u,v)/H(u,v)    (2)

where F'(u,v), F(u,v), N(u,v), and H(u,v) are the frequency transforms of the deblurred image, the original image, the noise, and the degradation function, respectively.
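A minimal frequency-domain restoration sketch along the lines of Eq. (2). The paper gives no implementation; the constant `K` below stands in for the noise-to-signal power ratio of the standard Wiener formulation (plain inverse filtering is recovered as K approaches 0), and the names and parameter values are illustrative assumptions.

```python
import numpy as np

def wiener_deblur(k, psf, K=0.01):
    """Frequency-domain Wiener restoration:
    F_hat = conj(H) * G / (|H|^2 + K),
    where K stands in for the noise-to-signal power ratio."""
    G = np.fft.fft2(k)
    H = np.fft.fft2(psf, s=k.shape)  # pad the PSF to the image size
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(F_hat))

# Blur a vertical line with a 1x3 horizontal box PSF (circular
# convolution via FFT), then restore it.
f = np.zeros((16, 16)); f[:, 8] = 1.0
psf = np.zeros((16, 16)); psf[0, :3] = 1.0 / 3.0
k = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf)))
restored = wiener_deblur(k, psf, K=1e-4)
```

The regularizing `K` in the denominator is what keeps the division stable at frequencies where |H| is small, which is exactly the weakness of the plain inverse filter mentioned above.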
1.2 Order Statistics Filters
These are the spatial filters [4] whose response is based on
ordering of the pixels contained in the image area and
compassed by the filter.The response of the filter at any point
is determined by ranking result.
F1(x,y)=median{g(s,t)} (3)
F1(x,y)=max{g(s,t)} (4)
F1(x,y)=mean{g(s,t)} (5)
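The order-statistics filters of Eqs. (3)-(5) can be sketched with a sliding window (illustrative code, not from the paper); passing `np.median`, `np.max`, or `np.mean` as `stat` selects the corresponding filter.

```python
import numpy as np

def order_statistic_filter(g, size=3, stat=np.median):
    """Apply an order-statistics filter: rank the pixels in each
    size x size window and keep one statistic (median, max, mean)."""
    pad = size // 2
    gp = np.pad(g, pad, mode='edge')  # replicate-pad the borders
    out = np.empty(g.shape, dtype=float)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            out[i, j] = stat(gp[i:i + size, j:j + size])
    return out

# A single impulse ("salt" pixel) is removed by the 3x3 median.
g = np.zeros((5, 5)); g[2, 2] = 255.0
denoised = order_statistic_filter(g, size=3, stat=np.median)
```

This is why the median filter is effective against impulse (salt-and-pepper) noise: an isolated outlier never reaches the middle rank of its window.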
2. PROPOSED METHOD
Image restoration is a pre-processing method that aims to suppress degradation using knowledge about its nature. Restoration attempts to recover an image that has been degraded using a priori knowledge of the degradation phenomenon. Hence, restoration techniques are focused on modelling the degradation and applying the inverse process in order to recover the original image. The relative motion between the camera and the object may blur the image during its formation on the film of the camera. The travelling-wave deblurring method works in the spatial domain, but the mathematical model discussed in that work is not generalized, and the discretization issues and stability criteria of the differential equation have not been addressed. In fact, when that differential equation is discretized using a forward-differencing scheme, it is unconditionally unstable and may not produce the desired results. A generalized PDE-based image model [3] is proposed here to model the formation of a blurred image due to relative motion between the camera and the object, and then the recovery of the original image in the spatial domain. The Lax scheme is used to discretize the resulting PDE; it is mathematically stable and produces good results. With the Lax discretization, the proposed PDE, initially a flux-conservative equation, transforms into a 1D flux-conservative equation with an added diffusion term, which has the form of the Navier-Stokes equation. The additional diffusion term contributes to further smoothing of the image.

Let X ∈ R^n, f: R^n → R, and X = (x_1, x_2, ..., x_n), with f a function of X. For a 1D object f(X) = f(x), and for a 2D object, i.e., an image, f(X) = f(x, y). Let V = (v_1, v_2, ..., v_n) denote the velocity vector of the object. If the object is moving in the horizontal direction only, the velocity reads V = v_x; if the object is in motion in the XY-plane in both the horizontal and vertical directions, the velocity vector reads V = (v_x, v_y). Suppose an n-dimensional object f(X) keeps a linear uniform motion at rate V in n-dimensional space under the surveillance of a camera. The total exposure g(X, t) at any point of the recording medium (e.g., film) is obtained by integrating the instantaneous exposure over the time interval 0 ≤ t ≤ T during which the camera shutter is open. The observed object for duration T can thus be modeled as

g(X, T) = ∫_0^T f(X − V t) dt    (6)
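Discretizing the exposure integral of Eq. (6) as a Riemann sum shows that uniform linear motion amounts to summing shifted copies of the object. The sketch below is an illustration under the assumption of an integer pixel shift per time step; the function name `exposure` is not from the paper.

```python
import numpy as np

def exposure(f, v, T, dt=1.0):
    """Discretize g(X) = integral_0^T f(X - V t) dt as a Riemann sum
    of horizontally shifted copies of f (v = pixels per unit time)."""
    steps = int(T / dt)
    return sum(np.roll(f, int(v * n * dt), axis=1) * dt
               for n in range(steps))

# An object drifting one pixel per time step for T = 3 time units:
# its single bright column is spread over three columns.
f = np.zeros((4, 8)); f[:, 2] = 1.0
g = exposure(f, v=1, T=3)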
Applying the Lax scheme to discretize the transport of g along x, with superscript n denoting the time level and subscript j the grid index, gives

g_j^(n+1) = (g_(j+1)^n + g_(j-1)^n)/2 − (v∆t/2∆x)(g_(j+1)^n − g_(j-1)^n)    (7)

From the above discretization, the equivalent PDE (the modified equation of the Lax scheme) is

∂g/∂t = −v ∂g/∂x + ((∆x)^2/2∆t) ∂^2g/∂x^2    (8)

an advection equation with the added diffusion term noted above.
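The Lax update of Eq. (7) can be sketched in a few lines (illustrative parameter values, not from the paper). Run at a Courant number v·∆t/∆x ≤ 1 it stays bounded, whereas the forward-differencing scheme criticized above grows without bound; the built-in averaging is exactly the diffusion term of Eq. (8).

```python
import numpy as np

def lax_step(g, v, dt, dx):
    """One Lax step for g_t = -v g_x:
    g_j^(n+1) = (g_(j+1) + g_(j-1))/2 - (v dt / 2 dx)(g_(j+1) - g_(j-1)).
    The averaging term supplies the (dx^2 / 2 dt) g_xx diffusion."""
    gp, gm = np.roll(g, -1), np.roll(g, 1)  # periodic neighbors
    return 0.5 * (gp + gm) - (v * dt / (2 * dx)) * (gp - gm)

# Advect a spike for 100 steps at Courant number v*dt/dx = 0.5:
# the solution stays bounded and its total mass is conserved.
g = np.zeros(64); g[32] = 1.0
for _ in range(100):
    g = lax_step(g, v=1.0, dt=0.05, dx=0.1)
```

Conservation of the sum reflects the flux-conservative form of the scheme, and boundedness is the stability property the paper relies on.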
2.1 Algorithm for Vertical Deblurring
The algorithm for this scheme is as follows:
1. Read the original image s of size m x n.
2. Introduce motion blur in the y direction to get s(y, x, t), or start directly from the blurred image s(y, x, t).
   Id = s(y, x): initial image
3. Set dy = 0.1, dt = 0.1.
4. For t = 1 : n iterations
   Id = Id − (v∆t)(∂Id/∂y) + ((∆y)^2/2∆t)(∂^2 Id/∂y^2)
   // evolves the solution over n iterations
   end
5. Display the image.
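The vertical-deblurring loop above can be sketched as follows. The update in step 4 is truncated in the source, so this sketch assumes it is the Lax update of Eq. (7) applied along the y (row) axis; the step sizes match step 3, but the sketch as a whole is an assumption, not the authors' code.

```python
import numpy as np

def vertical_deblur(Id, v=1.0, dy=0.1, dt=0.1, n_iter=50):
    """Sketch of the Section 2.1 loop, assuming the step-4 update
    is the Lax update of Eq. (7) along the y (row) axis."""
    c = v * dt / (2 * dy)
    for _ in range(n_iter):
        up, dn = np.roll(Id, -1, axis=0), np.roll(Id, 1, axis=0)
        Id = 0.5 * (up + dn) - c * (up - dn)
    return Id

blurred = np.zeros((32, 32)); blurred[10:20, :] = 1.0
result = vertical_deblur(blurred)
```

With these parameters the scheme is a convex combination of neighbors, so the iterate remains bounded between the input's minimum and maximum and conserves total intensity.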
2.2 The Combined Deblurring Algorithm
1. Read the original image K of size m x n.
2. Filter the image K to produce a blurred version h(x, y) by introducing motion blur in the x-direction.
3. Filter h(x, y) to get the final version K(x, y) by introducing motion blur in the y-direction (K(x, y) is the final blurred image with motion introduced in both the x and y directions).
   Initial image I = K(x, y)
4. Set dx = 0.1, dt = 0.1, no_iterations = 50, v = 1.
5. For t = 1 : no_iterations
   I = I − (v∆t)(∂I/∂x) + ((∆x)^2/2∆t)(∂^2 I/∂x^2)
6. R = I
7. Set dy = 0.1, dt = 0.1, num_iterations = 50, v = 1.
8. For t = 1 : num_iterations
9.   R = R − (v∆t)(∂R/∂y) + ((∆y)^2/2∆t)(∂^2 R/∂y^2)
10. Display R as the final deblurred image.
3. RESULTS
Blurring of images can be caused by movement of the object or the camera while capturing the image. Deblurring is the reconstruction or restoration of the uncorrupted image from a distorted and noisy one. In this paper, an idea for a two-directional image deblurring algorithm is introduced which uses the basic concepts of PDEs with prior knowledge of the PSF. Motion blur is introduced in two directions, horizontal and vertical. A PDE-based model for image deblurring considering both directions is then proposed, based on the mathematical model above. A simple two-dimensional algorithm has been introduced and implemented. The results show better image quality with this algorithm compared to previously designed techniques. The results are compared on the basis of the PSNR calculated for several noises, such as Gaussian noise, salt-and-pepper noise, and speckle noise. Deblurring is done with the mean taken as 0 and the variance as 0.001 for all noises. The results shown below for Gaussian noise, deblurred by the various filters, show that the PSNR is better for the PDE method.
Figure 2: Original image
Figure 3: Image blurred in the Y direction
Figure 4: Image blurred in the X direction
Figure 5: Noise added to the blurred image
Figure 6: Deblurred image in the Y direction
Figure 7: Deblurred image in the X direction
Figure 8: Deblurred image in the Y direction
Figure 9: Deblurred image in the X direction
Figure 10: Deblurred image in the Y direction
Figure 11: Deblurred image in the X direction
3.2 PSNR Table
A PSNR-based comparison is made among the different techniques. The PSNR is calculated for each technique under several noises; the table shows that the PDE method achieves better results for most noise types.

Table 1: PSNR calculation for different techniques.

Noise type | Blur     | Technique     | PSNR
Gaussian   | Vertical | Median filter | 28.1294
Gaussian   | Vertical | Wiener filter | 14.9225
Gaussian   | Vertical | PDE           | 40.1383
Impulse    | Vertical | Median filter | 36.9146
Impulse    | Vertical | Wiener filter | 8.8892
Impulse    | Vertical | PDE           | 24.8550
Poisson    | Vertical | Median filter | 23.3081
Poisson    | Vertical | Wiener filter | 9.7698
Poisson    | Vertical | PDE           | 26.9374
Speckle    | Vertical | Median filter | 19.5736
Speckle    | Vertical | Wiener filter | 7.2725
Speckle    | Vertical | PDE           | 19.7322
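PSNR values like those in Table 1 are computed with the standard definition PSNR = 10 log10(peak^2 / MSE). The sketch below is generic; the paper does not state its peak value, so 255 (8-bit images) is assumed.

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# One pixel off by 10 out of 64 gives an MSE of 100/64 = 1.5625,
# hence a PSNR of about 46.2 dB.
a = np.full((8, 8), 100.0)
b = a.copy(); b[0, 0] = 110.0
val = psnr(a, b)
```

A higher PSNR means the restored image is closer to the original, which is the sense in which the table compares the techniques.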
4. ACKNOWLEDGMENTS
A blend of gratitude, pleasure and great satisfaction is what I feel in conveying my indebtedness to all those who directly or indirectly contributed to the successful publication of this paper. I express my profound and sincere gratitude to my guide, Mr. Puneet Sharma, A.P. in the CSE department, whose persistent guidance and support helped me complete the paper in the stipulated time. His expert knowledge and scholarly suggestions helped me a lot. I am grateful to Mr. Neeraj Gupta, HOD, CSE, HCE Sonepat for his support. I am thankful to all my professors, lecturers and members of the department for their generous help in various ways towards the completion of this work.
5. REFERENCES
[1] M. Bertero and P. Boccacci, "Introduction to Inverse Problems in Imaging," IOP Publishing, Bristol, UK, 1998.
[2] S. Alliney, "Recursive median filters of increasing order: a variational approach," IEEE Transactions on Signal Processing 44(6), 1346-1354 (1996).
[3] Rajeev Srivastava, Harish Parthasarthy, J. R. P. Gupta and D. Roy Choudhary, "Image Restoration from Motion Blurred Image using PDEs formalism," IEEE International Advance Computing Conference (IACC 2009), March 2009.
[4] www.google.com