The document presents a robust adaptive threshold algorithm based on kernel fuzzy clustering for image segmentation. It proposes using kernel fuzzy c-means clustering (KFCM) to generate adaptive thresholds for segmenting images. KFCM computes fuzzy membership values for pixels to cluster them. The algorithm was tested on MR brain images and showed good performance in detecting large and small objects while also enhancing low contrast images. Experimental results demonstrated the efficiency and accuracy of combining an adaptive threshold algorithm with KFCM for medical image segmentation.
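The membership step the abstract attributes to KFCM can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the Gaussian kernel width `sigma`, and the fuzzifier `m` are our own assumptions.

```python
import numpy as np

def kfcm_memberships(pixels, centers, sigma=0.5, m=2.0):
    """Fuzzy membership of each pixel to each cluster centre, using the
    Gaussian kernel distance d = 1 - exp(-(x - v)^2 / sigma^2)."""
    x = np.asarray(pixels, float).reshape(-1, 1)       # shape (N, 1)
    v = np.asarray(centers, float).reshape(1, -1)      # shape (1, C)
    d = 1.0 - np.exp(-((x - v) ** 2) / sigma ** 2)     # kernel distance (N, C)
    d = np.maximum(d, 1e-12)                           # avoid division by zero
    w = d ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)            # each row sums to 1
```

A threshold could then be derived from the memberships, e.g. by assigning each pixel to its highest-membership cluster.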
A Hybrid Technique for the Automated Segmentation of Corpus Callosum in Midsa... (IJERA Editor)
The corpus callosum (CC) is the largest white-matter structure in the human brain. In this paper, we apply two techniques to observe the results of segmenting the corpus callosum: the first combines the mean shift algorithm with morphological operations, and the second uses k-means clustering. The method proceeds in three steps. The first step finds the corpus callosum area using the adaptive mean shift algorithm or k-means clustering. In the second step, the boundary of the detected CC area is used as the initial contour in the Geometric Active Contour (GAC) model. In the final step, the contour is evolved and morphological operations remove residual noise to produce the final segmentation result. The experimental results demonstrate that the mean shift algorithm and k-means clustering provide reliable segmentation performance.
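The k-means step of the pipeline above can be sketched on scalar pixel intensities. This is a generic illustration under our own assumptions (function name, initialization, and iteration count are not from the paper):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar pixel intensities: assign each value to the
    nearest centre, then recompute each centre as its cluster mean."""
    x = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)     # random initial centres
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers
```

The resulting label map would supply the initial CC region whose boundary seeds the GAC model.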
Performance Analysis of CRT for Image Encryption ijcisjournal
With the fast advancement of information technology, securing image data transmitted or stored over the internet has become very challenging. Encryption is an effective way to hide the details, so that only authorized persons holding the keys can decrypt the image. Because of the inherent features of digital images, such as high data capacity, large redundancy, and strong similarity among neighboring pixels, conventional encryption algorithms such as AES, DES, 3DES, and Blowfish are not well suited for real-time image encryption. This paper presents the performance of the Chinese Remainder Theorem (CRT) for image encryption, to secure the storage and transmission of images over the internet.
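The CRT mechanism behind such schemes can be sketched as follows: a pixel value is split into residues modulo pairwise-coprime keys, and recovered with the Chinese Remainder Theorem. This is a textbook illustration under our own assumptions, not the paper's actual cipher.

```python
def crt_encrypt(value, moduli):
    """Split a pixel value into its residues modulo pairwise-coprime keys."""
    return [value % m for m in moduli]

def crt_decrypt(residues, moduli):
    """Recover the value via the Chinese Remainder Theorem.
    Requires the moduli product to exceed the value (e.g. > 255 for 8-bit)."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M
```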
Copy Move Forgery Detection Using GLCM Based Statistical Features ijcisjournal
Gray Level Co-occurrence Matrix (GLCM) features have mostly been explored in face recognition and CBIR. Here, the GLCM technique is explored for copy-move forgery detection. GLCMs are extracted from all the images in the database, and statistics such as contrast, correlation, homogeneity, and energy are derived from them. These statistics form the feature vector. A Support Vector Machine (SVM) is trained on these features, and the authenticity of an image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD database; in all, 1200 forged and processed images are tested. The performance of the present work is compared against recent methods.
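A GLCM and two of the statistics named above (contrast and energy) can be computed directly. This is a minimal sketch assuming a small number of grey levels and a single pixel offset; the real pipeline would aggregate several offsets and feed an SVM.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for offset (dy, dx)."""
    img = np.asarray(img)
    P = np.zeros((levels, levels), float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1   # count level pairs
    return P / P.sum()

def glcm_stats(P):
    """Contrast and energy statistics derived from a GLCM."""
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(), "energy": (P ** 2).sum()}
```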
IMPROVED PARALLEL THINNING ALGORITHM TO OBTAIN UNIT-WIDTH SKELETON (ijma)
To extract reliable features from a fingerprint image, many systems use a thinning algorithm, which plays a very important role in preprocessing. In this paper, we propose a robust parallel thinning algorithm that preserves the connectivity of the binarized fingerprint image while producing a skeleton only 1 pixel wide that lies extremely close to the medial axis. The proposed thinning method repeats three sub-iterations. The first sub-iteration removes only the outermost boundary pixels, using the inner points. The second sub-iteration seeks 2-pixel-wide skeletons in order to extract one-sided skeletons. The third sub-iteration prunes the needless 2-pixel-wide segments remaining in the obtained skeletons. The proposed thinning algorithm is robust against rotation and noise and produces a balanced medial axis. To evaluate its performance, we compare it with, and analyze, previous algorithms.
Data Hiding Method With High Embedding Capacity Character (CSCJournals)
Recently, a data hiding method with high embedding capacity based on an improved EMD method was proposed by Kuo et al. [6]. They claimed that their scheme can not only hide a great deal of secret data but also maintain high security and good image quality. However, in their scheme, the sender and the receiver must share a synchronous random secret seed before they transmit the stego-image to each other; otherwise, the correct secret information cannot be recovered from the stego-image. In this paper we propose an improved scheme based on the EMD and LSB matching methods to overcome this problem: the sender does not need to share a synchronous random secret seed with the receiver before the stego-image is transmitted. The experimental results show that our proposed scheme achieves high embedding capacity and acceptable stego-image quality.
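The LSB matching component mentioned above can be sketched in isolation: when a pixel's least significant bit disagrees with the secret bit, the pixel is moved by ±1 at random rather than having its bit flipped. This is a generic illustration, not the authors' combined EMD/LSB scheme.

```python
import numpy as np

def lsb_match_embed(pixels, bits, seed=0):
    """LSB matching: if a pixel's LSB differs from the secret bit, nudge
    the pixel by +/-1 (randomly) so its LSB becomes the secret bit."""
    rng = np.random.default_rng(seed)
    out = np.asarray(pixels, int).copy()
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            step = rng.choice([-1, 1])
            if out[i] == 0:
                step = 1                # stay inside the valid range [0, 255]
            if out[i] == 255:
                step = -1
            out[i] += step
    return out

def lsb_extract(pixels, n):
    """Read the secret back as the LSBs of the first n pixels."""
    return [int(p) % 2 for p in pixels[:n]]
```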
Fast Segmentation of Sub-cellular Organelles (CSCJournals)
Segmenting and counting sub-cellular structures is a very challenging problem, even for medical experts. A fast and efficient method for segmenting and counting sub-cellular structures is proposed. The proposed method uses a hybrid combination of several image processing techniques and segments the sub-cellular structures both quickly and effectively.
Geometric Correction for Braille Document Images csandit
Image processing is an important research area in computer vision. Clustering is an unsupervised learning technique that can also be used for image segmentation, one of the first and most important tasks in image analysis and computer vision, for which many methods exist. The proposed system presents a variation of the fuzzy c-means algorithm for image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM) and improves clustering accuracy significantly compared with the classical algorithm. The new algorithm, called the Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy clustering approach that aims to guarantee insensitivity to noise and preservation of image detail. The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images and to segment those regions separately using a content level set approach. The system is designed to produce better segmentation results for images corrupted by noise, making it useful in fields such as medical image analysis: tumor detection, study of anatomical structures, and treatment planning.
Image confusion and diffusion based on multi-chaotic system and mix-column (journalBEEI)
In this paper, a new image encryption algorithm based on chaotic cryptography is proposed. The scheme is based on multiple stages of confusion and diffusion. The diffusion process is implemented twice: first by permuting the pixels of the plain image using an Arnold cat map, and a second time by permuting the plain-image pixels via the proposed permutation algorithm. The confusion process is performed several times: by XORing the two permuted images, by subtracting a random value from all pixels of the image, by applying the mix-column operation to the resulting image, and by using the Lorenz key to obtain the encrypted image. The security analysis tests used to examine the results of this encryption method (information entropy, key-space analysis, correlation, histogram analysis, UACI, and NPCR) show that the suggested algorithm is resistant to different types of attacks.
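The Arnold cat map permutation named above can be sketched directly; it is a standard construction (determinant-1 map on a square image), shown here under our own naming, not the paper's code.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Arnold cat map on a square N x N image: index (i, j) moves to
    ((i + j) mod N, (i + 2j) mod N). The map has determinant 1, so it is
    a bijection that merely permutes (scrambles) the pixels."""
    out = np.asarray(img).copy()
    N = out.shape[0]
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for i in range(N):
            for j in range(N):
                nxt[(i + j) % N, (i + 2 * j) % N] = out[i, j]
        out = nxt
    return out
```

Because the map is bijective, iterating it eventually returns the original image, which is why such schemes pair it with keyed confusion stages.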
Robust foreground modelling to segment and detect multiple moving objects in ... (IJECEIAES)
The last decade has witnessed an ever-increasing number of video surveillance installations due to the rise of security concerns worldwide. With this comes the need for video analysis for fraud detection, crime investigation, and traffic monitoring, to name a few. For any kind of video analysis application, detection of moving objects in videos is a fundamental step. In this paper, an efficient foreground modelling method to segment multiple moving objects is implemented. The proposed method significantly reduces noise, accurately segmenting the region of interest under dynamic conditions while handling occlusion to a large extent. Extensive performance analysis shows that the proposed method gives far better results than the de facto standard as well as relatively new approaches used for moving object detection.
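A minimal background model of the kind such methods build on is the exponential running average: the model adapts slowly, so moving foreground stands out as a large residual. This sketch uses our own parameter choices and is not the paper's foreground model.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of the scene; small alpha means the
    background adapts slowly to gradual changes."""
    return (1.0 - alpha) * np.asarray(bg, float) + alpha * np.asarray(frame, float)

def foreground_mask(bg, frame, thresh=30.0):
    """Flag pixels whose deviation from the background exceeds the threshold."""
    return np.abs(np.asarray(frame, float) - np.asarray(bg, float)) > thresh
```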
New Approach of Preprocessing For Numeral Recognition (IJERA Editor)
The present paper proposes a new preprocessing approach for handwritten, printed, and isolated numeral characters. The new approach reduces the size of the input image of each numeral by discarding redundant information. This also reduces the number of features in the attribute vector produced by the feature extraction method. Numeral recognition is carried out in this work using k-nearest-neighbor and multilayer perceptron techniques. The simulations obtained a good recognition rate with a shorter running time.
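One simple way to "discard redundant information" around a numeral, sketched here as an assumption about the preprocessing rather than the paper's exact method, is to crop blank border rows and columns down to the glyph's bounding box:

```python
import numpy as np

def crop_to_content(img):
    """Drop all-blank border rows/columns so only the numeral's bounding
    box remains; fewer pixels means a shorter feature vector."""
    img = np.asarray(img)
    rows = np.any(img > 0, axis=1)
    cols = np.any(img > 0, axis=0)
    if not rows.any():
        return img                       # blank image: nothing to crop
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]
```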
Optimization of Macro Block Size for Adaptive Rood Pattern Search Block Match... (IJERA Editor)
In the area of video compression, motion estimation is one of the most important modules and plays a key role in the design and implementation of any video encoder. It consumes more than 85% of video encoding time, owing to the search for a candidate block in the search window of the reference frame. Various block matching methods have been developed to minimize the search time. In this context, Adaptive Rood Pattern Search is one of the less expensive block matching methods and is widely accepted for motion estimation in video data processing. In this paper we propose optimizing the macro block size used in the adaptive rood pattern search method to improve motion estimation.
Black-box modeling of nonlinear system using evolutionary neural NARX model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model using mathematical approaches. Therefore, a black-box modeling approach that requires no prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network with a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling approach are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
Adaptive threshold for moving objects detection using gaussian mixture model (TELKOMNIKA JOURNAL)
Moving object detection is an important task in video surveillance systems. Defining the threshold automatically, so as to differentiate a moving object from the background within a video, is challenging. This study proposes the Gaussian mixture model (GMM) as a threshold strategy for moving object detection. The performance of the proposed method is compared with the Otsu algorithm and a gray threshold as baseline methods, using mean square error (MSE) and peak signal-to-noise ratio (PSNR). The comparison is evaluated on a human video dataset. The average MSE of GMM is 257.18, versus 595.36 for Otsu and 645.39 for the gray threshold, so the MSE of GMM is lower than both baselines. The average PSNR of GMM is 24.71, versus 20.66 for Otsu and 19.35 for the gray threshold, so the PSNR of GMM is higher than both baselines. The proposed method therefore outperforms the baseline methods in terms of detection error.
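The MSE and PSNR metrics used for the comparison above are standard and can be computed as follows (a generic sketch, not the study's evaluation script):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```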
TERRAIN IDENTIFICATION USING CO-CLUSTERED MODEL OF THE SWARM INTELLIGENCE & S... (cscpconf)
A digital image is nothing more than data: numbers indicating variations of red, green, and blue at particular locations on a grid of pixels. Clustering is the process of assigning data objects to a set of disjoint groups, called clusters, so that objects in each cluster are more similar to each other than to objects from different clusters. Clustering techniques are applied in many areas, such as pattern recognition, data mining, and machine learning. Clustering algorithms can be broadly classified as hard, fuzzy, possibilistic, and probabilistic. K-means is one of the most popular hard clustering algorithms; it partitions data objects into k clusters, where the number of clusters, k, is decided in advance according to the application. This model is inappropriate for real data sets in which there are no definite boundaries between the clusters. After the fuzzy theory introduced by Lotfi Zadeh, researchers brought fuzzy theory into clustering. Fuzzy algorithms can assign a data object partially to multiple clusters; the degree of membership in each fuzzy cluster depends on the closeness of the data object to the cluster centers. The most popular fuzzy clustering algorithm is fuzzy c-means (FCM), introduced by Bezdek in 1974 and now widely used. Fuzzy c-means clustering is an effective algorithm, but the random selection of initial center points makes the iterative process prone to falling into local optima. To address this problem, evolutionary algorithms such as the genetic algorithm (GA), simulated annealing (SA), ant colony optimization (ACO), and particle swarm optimization (PSO) have recently been applied with success.
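The two alternating FCM updates described above (memberships from distances, centers from weighted means) can be sketched for scalar data; this is the textbook formulation, not the paper's co-clustered model.

```python
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """FCM membership update: u_ik is proportional to d_ik^(-2/(m-1)),
    normalised so each point's memberships sum to one."""
    d = np.abs(np.asarray(x, float)[:, None] - np.asarray(centers, float)[None, :])
    d = np.maximum(d, 1e-12)                 # avoid division by zero
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)

def fcm_centers(x, u, m=2.0):
    """FCM centre update: weighted mean of the data with weights u^m."""
    um = u ** m
    return (um * np.asarray(x, float)[:, None]).sum(axis=0) / um.sum(axis=0)
```

Evolutionary methods such as PSO replace the random initialization of `centers` with a population-based search to escape local optima.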
A ROBUST BACKGROUND REMOVAL ALGORITHM USING FUZZY C-MEANS CLUSTERING (IJNSA Journal)
Background subtraction is typically one of the first steps in motion detection with static video cameras. This paper presents a novel method for background removal that processes only some of the pixels of each image. Regions of interest around the objects in the image or frame are located with the help of an edge detector; once a region is detected, only that area is segmented instead of processing the whole image. This achieves a significant reduction in computation time, which can then be devoted to subsequent image analysis. In this paper we detect the foreground object with the help of an edge detector and combine it with the fuzzy c-means clustering algorithm to segment the object; by subtracting the current frame from the previous frame, the accurate background is identified.
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM ijcisjournal
Color image segmentation algorithms in the literature segment an image on the basis of color, texture, and
also as a fusion of both color and texture. In this paper, a color image segmentation algorithm is proposed
by extracting both texture and color features and applying them to the One-Against-All Multi Class Support
Vector Machine classifier for segmentation. A novel Power Law Descriptor (PLD) is used for extracting
the textural features and homogeneity model is used for obtaining the color features. The Multi Class SVM
is trained using the samples obtained from Soft Rough Fuzzy-C-Means (SRFCM) clustering. Fuzzy set
based membership functions capably handle the problem of overlapping clusters. The lower and upper
approximation concepts of rough sets deal well with uncertainty, vagueness, and incompleteness in data.
Parameterization tools are not a prerequisite in defining Soft set theory. The goodness aspects of soft sets,
rough sets and fuzzy sets are incorporated in the proposed algorithm to achieve improved segmentation
performance. The Power Law Descriptor used for texture feature extraction has the advantage of operating in the spatial domain, thereby reducing computational complexity. The proposed algorithm achieves performance comparable to, and in some cases better than, the state-of-the-art algorithms found in the literature.
3-D FFT Moving Object Signatures for Velocity Filtering (IDES Editor)
In this paper a bank of velocity filters is devised for isolating a moving object with a specific velocity (magnitude and direction) in a sequence of frames. The approach is an experimental, 3-D FFT based procedure that does not rely on theoretical velocity-filter concepts. Accordingly, each velocity filter is built from the spectral signature of an object moving with a specific velocity. Experimentation reveals the capability of the constructed filter bank to separate moving objects by both the magnitude and the direction of their velocity. Weak objects moving with a different velocity can therefore be detected even when close to strong ones, while accelerating objects can be detected only on the part of their trajectory where they have the specific velocity. Problems arising from discontinuities at the edges of the frame sequences are discussed.
The study evaluates three background subtraction techniques, ranging from a very basic algorithm to state-of-the-art published techniques, categorized by speed, memory requirements, and accuracy. Such a review can effectively guide a designer to select the most suitable method for a given application in a principled way. The algorithms used in the study cover varying levels of accuracy and computational complexity. A few of them can also deal with real-time challenges such as rain, snow, hail, swaying branches, overlapping objects, varying light intensity, or slow-moving objects.
FUZZY IMAGE SEGMENTATION USING VALIDITY INDEXES CORRELATION (ijcsit)
This paper introduces an algorithm for image segmentation using a clustering technique. The technique is based on the fuzzy c-means algorithm (FCM), executed iteratively with different numbers of clusters. Simultaneously, five validity indexes are calculated and their information is correlated to determine the optimal number of clusters for segmenting an image. Results and simulations are shown in the paper.
Image Compression based on DCT and BPSO for MRI and Standard Images (IJERA Editor)
Nowadays, digital image compression has become a crucial component of modern telecommunication systems. Image compression is the process of reducing the total number of bits required to represent an image, by reducing redundancies while preserving image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging, and medical imaging, use image compression to store and transmit images efficiently. Selecting a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm for finding an optimal solution from a set of possible values. Its advantages over other optimization techniques are a higher convergence rate, better searching ability, and better overall performance. The proposed technique divides the input image into 8×8 blocks. The Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients. Threshold values are then obtained from BPSO, and the coefficient values are modified based on these thresholds. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
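The 8×8 block DCT step at the heart of this pipeline can be sketched with the orthonormal DCT-II basis matrix; the BPSO thresholding and Huffman coding stages are omitted here, and the function names are our own.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on 8x8 JPEG-style blocks."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)               # DC row has its own scale
    return C

def dct2(block):
    """2-D DCT of a square block: C @ X @ C.T."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    """Inverse 2-D DCT; C is orthonormal, so the inverse is C.T @ Y @ C."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

A flat block concentrates all its energy in the single DC coefficient, which is exactly the redundancy the subsequent thresholding and entropy coding exploit.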
The Fourier transform for satellite image compression (csandit)
The need to transmit or store satellite images is growing rapidly with the development of modern communications and new imaging systems. The goal of compression is to facilitate the storage and transmission of large images on the ground with high compression ratios and minimum distortion. In this work, we present a new coding scheme for satellite images. First, the image is loaded and a fast Fourier transform (FFT) is applied. The FFT output then undergoes scalar quantization (SQ), and the quantized results are encoded using entropy coding. This approach has been tested on a satellite image and the Lena picture. After decompression, the images were reconstructed faithfully, and the memory space required for storage was reduced by more than 80%.
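The principle of FFT-based compression can be illustrated crudely by keeping only the largest-magnitude coefficients and reconstructing; this stands in for the FFT + scalar quantization + entropy coding pipeline and uses our own parameter names.

```python
import numpy as np

def fft_compress(img, keep=0.1):
    """Keep only the largest `keep` fraction of FFT coefficients (by
    magnitude), zero the rest, and reconstruct with the inverse FFT."""
    F = np.fft.fft2(np.asarray(img, float))
    mag = np.abs(F).ravel()
    thresh = np.sort(mag)[int((1 - keep) * mag.size)]
    F_sparse = np.where(np.abs(F) >= thresh, F, 0)
    return np.real(np.fft.ifft2(F_sparse))
```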
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA... (cscpconf)
Image inpainting derives from the restoration of artworks and has been applied to repair ancient works of art. Inpainting is a technique for restoring a partially damaged or occluded image in an undetectable way: it fills the damaged part of an image using information from the undamaged part, according to rules that make the result look "reasonable" to human eyes. Digital image inpainting is a relatively new area of research, but numerous different approaches to the inpainting problem have been proposed since the concept was first introduced. This paper analyzes and compares recent exemplar-based inpainting algorithms by Minqin Wang and by Hao Guo et al. A number of examples on real images are presented to evaluate the results of the algorithms using the peak signal-to-noise ratio (PSNR).
Background Estimation Using Principal Component Analysis Based on Limited Mem... (IJECEIAES)
Given a video of M frames of size h × w, the background components of the video are the matrix elements that remain relatively constant over the M frames. In the PCA (principal component analysis) method these elements are referred to as "principal components". In video processing, background subtraction means the excision of the background component from the video, and PCA is used to obtain that component. The method transforms the 3-dimensional video (h × w × M) into a 2-dimensional matrix (N × M), where N is a linear array of size h × w. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. A limited-memory block Krylov subspace optimization is then proposed to improve the computational performance. The background estimate is obtained by projecting each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on a standard dataset, namely the SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions from 146×150 to 352×240 and total frame counts from 258 to 500. The performance is reported with 8 metrics, notably (on average over the 8 videos) a percentage of error pixels of 0.24%, a percentage of clustered error pixels of 0.21%, a multiscale structural similarity index of 0.88 (out of a maximum of 1), and a running time of 61.68 seconds.
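The PCA projection step described above can be sketched with a plain SVD in place of the limited-memory Krylov solver the paper proposes; the function name and the choice of k are our own assumptions.

```python
import numpy as np

def pca_background(frames, k=1):
    """Stack M frames as columns of an N x M matrix, keep the k dominant
    left singular vectors, and project the first frame onto that subspace
    to obtain the background estimate."""
    X = np.asarray(frames, float).reshape(len(frames), -1).T   # N x M
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uk = U[:, :k]                          # dominant eigenspace basis
    bg = Uk @ (Uk.T @ X[:, 0])             # projection of the first frame
    return bg.reshape(np.asarray(frames[0]).shape)
```

The Krylov-subspace variant matters when N = h × w is large, since it avoids forming the full SVD.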
GAUSSIAN KERNEL BASED FUZZY C-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION (cscpconf)
Image confusion and diffusion based on multi-chaotic system and mix-columnjournalBEEI
In this paper, a new image encryption algorithm based on chaotic cryptography is proposed. The scheme is built on multiple stages of confusion and diffusion. The diffusion process is applied twice: first by permuting the pixels of the plain image with an Arnold cat map, and second by permuting them with the proposed permutation algorithm. The confusion process is performed several times: by XORing the two permuted images, by subtracting a random value from all pixels of the image, by applying the mix-column operation to the result, and by using the Lorenz key to obtain the encrypted image. The security analyses used to examine the results of this encryption method, namely information entropy, key space analysis, correlation, histogram analysis, UACI, and NPCR, show that the suggested algorithm is resistant to different types of attacks.
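The confusion stage can be illustrated with a simplified stand-in: a logistic-map keystream XORed onto the pixels (the paper itself combines Arnold cat, mix-column, and Lorenz stages, none of which are reproduced here; `x0` and `r` are illustrative key values):

```python
def logistic_keystream(x0, r, n):
    """Generate n keystream bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)  # quantize chaotic state to a byte
    return out

def xor_confuse(pixels, x0=0.3141, r=3.99):
    """XOR a flat list of byte-valued pixels with the chaotic keystream."""
    ks = logistic_keystream(x0, r, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because XOR with the same keystream is an involution, applying `xor_confuse` twice with the same key recovers the plain pixels, which is exactly why such confusion stages are invertible for authorized decryption.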
Robust foreground modelling to segment and detect multiple moving objects in ...IJECEIAES
The last decade has witnessed an ever increasing number of video surveillance installations due to the rise of security concerns worldwide. With this comes the need for video analysis for fraud detection, crime investigation, and traffic monitoring, to name a few. For any kind of video analysis application, detection of moving objects in videos is a fundamental step. In this paper, an efficient foreground modelling method to segment multiple moving objects is implemented. The proposed method significantly reduces noise, accurately segmenting the region of interest under dynamic conditions while handling occlusion to a large extent. Extensive performance analysis shows that the proposed method gives far better results than the de facto standard as well as relatively new approaches used for moving object detection.
New Approach of Preprocessing For Numeral RecognitionIJERA Editor
The present paper proposes a new preprocessing approach for handwritten, printed, and isolated numeral characters. The approach reduces the size of the input image of each numeral by discarding redundant information, and also reduces the number of features in the attribute vector produced by the feature extraction method. Numeral recognition is carried out in this work through k-nearest-neighbour and multilayer perceptron techniques. The simulations achieved a good recognition rate with reduced running time.
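The k-nearest-neighbour classifier mentioned above can be sketched in a few lines (a generic majority-vote k-NN over Euclidean distance, not the paper's exact pipeline; the feature vectors here are hypothetical 2-D examples):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours."""
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the paper's setting, each training pair would hold the reduced attribute vector of a numeral image and its digit label.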
Optimization of Macro Block Size for Adaptive Rood Pattern Search Block Match...IJERA Editor
In the area of video compression, motion estimation is one of the most important modules and plays an important role in the design and implementation of any video encoder. It consumes more than 85% of video encoding time due to the search for a candidate block in the search window of the reference frame. Various block matching methods have been developed to minimize the search time. In this context, Adaptive Rood Pattern Search is one of the less expensive block matching methods and is widely accepted for motion estimation in video data processing. In this paper we propose to optimize the macro block size used in the adaptive rood pattern search method to improve motion estimation.
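The block matching cost at the heart of ARPS is the sum of absolute differences (SAD). The sketch below shows one rood-shaped search step only (center plus four arm points), with the block size and arm length as illustrative parameters; the full ARPS with predicted motion vectors and unit-rood refinement is not reproduced:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def rood_search(ref, cur, bx, by, bs, arm):
    """One rood-pattern step: test center plus 4 arm points, keep the best
    in-bounds motion vector for the bs x bs block at (bx, by)."""
    def block(frame, x, y):
        return [row[x:x + bs] for row in frame[y:y + bs]]
    target = block(cur, bx, by)
    best, best_mv = float('inf'), (0, 0)
    for dx, dy in [(0, 0), (arm, 0), (-arm, 0), (0, arm), (0, -arm)]:
        x, y = bx + dx, by + dy
        if 0 <= x <= len(ref[0]) - bs and 0 <= y <= len(ref) - bs:
            cost = sad(block(ref, x, y), target)
            if cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv
```

Optimizing the macro block size `bs`, as the paper proposes, trades matching accuracy against the number of SAD evaluations per frame.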
Black-box modeling of nonlinear system using evolutionary neural NARX modelIJECEIAES
Nonlinear systems with uncertainty and disturbance are very difficult to model using mathematic approach. Therefore, a black-box modeling approach without any prior knowledge is necessary. There are some modeling approaches have been used to develop a black box model such as fuzzy logic, neural network, and evolution algorithms. In this paper, an evolutionary neural network by combining a neural network and a modified differential evolution algorithm is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric actuator SISO system and an experimental quadruple tank MIMO system.
Adaptive threshold for moving objects detection using gaussian mixture modelTELKOMNIKA JOURNAL
Moving object detection is an important task in video surveillance systems. Automatically defining the threshold that differentiates the moving object from the background within a video is challenging. This study proposes the Gaussian mixture model (GMM) as a threshold strategy in moving object detection. The performance of the proposed method is compared to the Otsu algorithm and the gray threshold as baseline methods using mean square error (MSE) and peak signal-to-noise ratio (PSNR), evaluated on a human video dataset. The average MSE of GMM is 257.18, versus 595.36 for Otsu and 645.39 for the gray threshold, so the GMM MSE is lower than both baselines. The average PSNR of GMM is 24.71, versus 20.66 for Otsu and 19.35 for the gray threshold, so the GMM PSNR is higher than both baselines. The proposed method thus outperforms the baseline methods in terms of detection error.
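The two evaluation metrics quoted above are standard; a minimal sketch of both on flat pixel sequences (not the paper's evaluation code) is:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; higher means closer to reference."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```

A lower MSE and a higher PSNR against the ground-truth mask, as reported for GMM versus Otsu and the gray threshold, both indicate fewer wrongly classified pixels.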
TERRAIN IDENTIFICATION USING CO-CLUSTERED MODEL OF THE SWARM INTELLIGENCE & S...cscpconf
A digital image is nothing more than data -- numbers indicating variations of red, green, and
blue at a particular location on a grid of pixels. Clustering is the process of assigning data
objects into a set of disjoint groups called clusters so that objects in each cluster are more
similar to each other than objects from different clusters. Clustering techniques are applied in
many application areas such as pattern recognition, data mining, machine learning, etc.
Clustering algorithms can be broadly classified as hard, fuzzy, possibilistic, and probabilistic. K-means
is one of the most popular hard clustering algorithms; it partitions data objects into k
clusters where the number of clusters, k, is decided in advance according to application
purposes. This model is inappropriate for real data sets in which there are no definite boundaries
between the clusters. After the fuzzy theory introduced by Lotfi Zadeh, researchers brought
fuzzy theory into clustering. Fuzzy algorithms can assign data objects partially to multiple
clusters, with the degree of membership in each fuzzy cluster depending on the closeness of the data
object to the cluster centers. The most popular fuzzy clustering algorithm is fuzzy c-means (FCM),
introduced by Bezdek in 1974 and now widely used. Fuzzy c-means clustering is an
effective algorithm, but the random selection of initial center points makes the iterative process prone to
falling into local optima. To address this problem, evolutionary algorithms such
as genetic algorithms (GA), simulated annealing (SA), ant colony optimization (ACO), and particle swarm optimization (PSO) have recently been applied successfully.
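Bezdek's classical FCM update rules described above can be sketched on scalar data (a minimal illustration, with deterministic rather than random initialization, so it sidesteps the local-optima issue the paragraph raises):

```python
def fcm(data, c=2, m=2.0, iters=25):
    """Classical fuzzy c-means on scalar data (Bezdek's update rules)."""
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (i + 1) / (c + 1) for i in range(c)]
    for _ in range(iters):
        U = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]  # guard zero distance
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # center update: membership-weighted mean
        centers = [sum((U[k][i] ** m) * x for k, x in enumerate(data)) /
                   sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return sorted(centers)
```

Evolutionary methods such as GA or PSO, as the paragraph notes, are typically used to choose better initial `centers` before running these same update rules.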
A ROBUST BACKGROUND REMOVAL ALGORITHM USING FUZZY C-MEANS CLUSTERINGIJNSA Journal
Background subtraction is typically one of the first steps in motion detection with static video cameras. This paper presents a novel method for background removal that processes only some pixels of each image. Regions of interest around the objects in the image or frame are located with the help of an edge detector; once a region is detected, only that area is segmented instead of processing the whole image. This method achieves a significant reduction in computation time, which can then be spent on subsequent image analysis. In this paper we detect the foreground object with the help of an edge detector and apply the fuzzy c-means clustering algorithm to segment the object; by subtracting the current frame from the previous frame, the accurate background is identified.
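The frame-subtraction step underlying this and several of the other abstracts can be sketched as a simple per-pixel difference threshold (a generic illustration, not the paper's edge-guided method; the threshold value is an assumption):

```python
def foreground_mask(prev, cur, thresh=25):
    """Mark a pixel as foreground when |cur - prev| exceeds the threshold."""
    return [[1 if abs(c - p) > thresh else 0 for c, p in zip(cr, pr)]
            for cr, pr in zip(cur, prev)]
```

The paper's contribution is to restrict this comparison to edge-detected regions of interest rather than running it over every pixel.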
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM ijcisjournal
Color image segmentation algorithms in the literature segment an image on the basis of color, texture, and
also as a fusion of both color and texture. In this paper, a color image segmentation algorithm is proposed
by extracting both texture and color features and applying them to the One-Against-All Multi Class Support
Vector Machine classifier for segmentation. A novel Power Law Descriptor (PLD) is used for extracting
the textural features and homogeneity model is used for obtaining the color features. The Multi Class SVM
is trained using the samples obtained from Soft Rough Fuzzy-C-Means (SRFCM) clustering. Fuzzy set
based membership functions capably handle the problem of overlapping clusters. The lower and upper
approximation concepts of rough sets deal well with uncertainty, vagueness, and incompleteness in data.
Parameterization tools are not a prerequisite in defining Soft set theory. The goodness aspects of soft sets,
rough sets and fuzzy sets are incorporated in the proposed algorithm to achieve improved segmentation
performance. The Power Law Descriptor used for texture feature extraction has the advantage of being
dealt in the spatial domain thereby reducing computational complexity. The proposed algorithm is
comparable and achieved better performance compared with the state of the art algorithms found in the
literature.
3-D FFT Moving Object Signatures for Velocity FilteringIDES Editor
In this paper a bank of velocity filters is devised to
be used for isolating a moving object with specific velocity
(amplitude and direction) in a sequence of frames. The
approach used is a 3-D FFT based experimental procedure
without applying any theoretical concept from velocity filters.
Accordingly, each velocity filter is built using the spectral
signature of an object moving with specific velocity.
Experimentation reveals the capabilities of the constructed
filter bank to separate moving objects as far as the amplitude
as well as the direction of the velocity are concerned.
Accordingly, weak objects can be detected when moving with
a different velocity close to strong vehicles. Accelerating objects
can be detected only on the part of their trajectory where they have
the specific velocity. Problems which arise due to the
discontinuities at the edges of the frame sequences are
discussed.
The study evaluates three background subtraction techniques, ranging from very basic
algorithms to state-of-the-art published techniques, categorized by speed, memory requirements, and
accuracy. Such a review can effectively guide a designer to select the most suitable method for a given
application in a principled way. The algorithms used in the study span varying levels of accuracy
and computational complexity. A few of them can also deal with real-world challenges such as rain, snow, hail,
swaying branches, overlapping objects, varying light intensity, and slow moving objects.
FUZZY IMAGE SEGMENTATION USING VALIDITY INDEXES CORRELATIONijcsit
This paper introduces an algorithm for image segmentation using a clustering technique based
on the fuzzy c-means algorithm (FCM), executed iteratively with different numbers of clusters.
Simultaneously, five validity indexes are calculated and their information is correlated to
determine the optimal number of clusters with which to segment an image; results and simulations are shown
in the paper.
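The abstract does not name the five indexes, but one widely used cluster validity index is Bezdek's partition coefficient, shown here as a minimal illustration of how such an index scores a fuzzy membership matrix:

```python
def partition_coefficient(U):
    """Bezdek's partition coefficient over a membership matrix U
    (one row per data point): values near 1 mean a crisp partition,
    values near 1/c mean a maximally fuzzy one."""
    return sum(u * u for row in U for u in row) / len(U)
```

An index like this would be evaluated for each candidate number of clusters, and the correlated scores of several indexes would pick the best cluster count.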
Image Compression based on DCT and BPSO for MRI and Standard ImagesIJERA Editor
Nowadays, digital image compression has become a crucial factor in modern telecommunication systems. Image compression is the process of reducing the total bits required to represent an image by reducing redundancies while preserving the image quality as much as possible. Various applications including the internet, multimedia, satellite imaging, and medical imaging use image compression in order to store and transmit images in an efficient manner. Selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm used for finding an optimal solution from a set of possible values. The dominant factors of BPSO over other optimization techniques are its higher convergence rate, searching ability, and overall performance. The proposed technique divides the input image into 8×8 blocks. The discrete cosine transform (DCT) is applied to each block to obtain the coefficients. Then, threshold values are obtained from BPSO and the coefficients are modified based on these thresholds. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
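The DCT-and-threshold step can be illustrated in one dimension (one row of an 8×8 block); this is a sketch with an unnormalized DCT-II and a fixed threshold standing in for the BPSO-optimized one:

```python
import math

def dct(x):
    """Unnormalized DCT-II of a 1-D signal."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse via DCT-III, scaled so that idct(dct(x)) == x."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * k * (n + 0.5))
                            for k in range(1, N))) * 2 / N for n in range(N)]

def compress_row(x, thresh):
    """Zero out small DCT coefficients, as a BPSO-chosen threshold would."""
    return [c if abs(c) >= thresh else 0.0 for c in dct(x)]
```

Zeroed coefficients cost almost nothing after quantization and Huffman coding, which is where the bit savings come from.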
The Fourier transform for satellite image compressioncsandit
The need to transmit or store satellite images is growing rapidly with the development of
modern communications and new imaging systems. The goal of compression is to facilitate the
storage and transmission of large images on the ground with high compression ratios and
minimum distortion. In this work, we present a new coding scheme for satellite images. First,
the image is loaded and a fast Fourier transform (FFT) is applied. The result of the FFT
then undergoes scalar quantization (SQ), and the quantized
coefficients are encoded using entropy coding. This approach has been tested on a
satellite image and the Lena picture. After decompression, the images were reconstructed faithfully
and the memory space required for storage was reduced by more than 80%.
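The transform-then-quantize pipeline can be sketched as follows; a naive DFT stands in for the FFT, the entropy-coding stage is omitted, and the quantization step size is an illustrative choice:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for the FFT stage)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def quantize(X, step):
    """Uniform scalar quantization of real and imaginary parts."""
    q = lambda v: round(v / step)
    return [(q(c.real), q(c.imag)) for c in X]

def dequantize(Q, step):
    return [complex(r * step, i * step) for r, i in Q]

def idft(X):
    """Inverse DFT, returning real samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]
```

The quantized integer pairs are what the entropy coder would compress; coarser steps give higher compression ratios at the cost of reconstruction error.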
A CONCERT EVALUATION OF EXEMPLAR BASED IMAGE INPAINTING ALGORITHMS FOR NATURA...cscpconf
Image inpainting derives from restoration of art works, and has been applied to repair ancient
art works. Inpainting is a technique of restoring a partially damaged or occluded image in an
undetectable way. It fills the damaged part of an image by employing information of the
undamaged part according to some rules to make it look “reasonable” to human eyes. Digital
image inpainting is a relatively new area of research, but numerous different approaches to
tackling the inpainting problem have been proposed since the concept was first introduced. This
paper analyzes and compares recent exemplar based inpainting algorithms by Minqin Wang
and Hao Guo et al. A number of examples on real images are used to evaluate the
results of the algorithms using peak signal-to-noise ratio (PSNR).
Background Estimation Using Principal Component Analysis Based on Limited Mem...IJECEIAES
Given a video of M frames of size h × w, the background components of the video are the matrix elements that are relatively constant over the M frames. In the PCA (principal component analysis) method these elements are referred to as "principal components". In video processing, background subtraction means the removal of the background component from the video, and the PCA method is used to obtain that component. The method transforms the 3-dimensional video (h × w × M) into a 2-dimensional one (N × M), where N is a linear array of size h × w. The principal components are the dominant eigenvectors, which form the basis of an eigenspace. Limited-memory block Krylov subspace optimization is then proposed to improve the performance of the computation. The background estimate is obtained as the projection of each input image (the first frame of each image sequence) onto the space spanned by the principal components. The procedure was run on a standard dataset, the SBI (Scene Background Initialization) dataset, consisting of 8 videos with resolutions in the range [146×150, 352×240] and total frames in [258, 500]. The performance is reported with 8 metrics, notably (averaged over the 8 videos) the percentage of error pixels (0.24%), the percentage of clustered error pixels (0.21%), the multiscale structural similarity index (0.88 out of a maximum of 1), and the running time (61.68 seconds).
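The dominant-eigenvector computation at the core of the PCA step can be illustrated with power iteration on a small symmetric matrix (a stand-in for the frame covariance; the limited-memory block Krylov method the paper proposes is much more efficient at video scale):

```python
import math

def power_iteration(A, iters=100):
    """Dominant eigenvector of a small symmetric matrix A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        # multiply, then renormalize; the dominant direction wins
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

In the background-estimation setting, `A` would be the covariance of the N × M frame matrix, and projecting a frame onto the leading eigenvectors yields the background estimate.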
Survey on clustering based color image segmentation and novel approaches to f...eSAT Journals
Abstract: Segmentation is an important image processing technique that helps to analyze an image automatically. Applications involving detection or recognition of objects in images often include a segmentation process. This paper describes two unsupervised clustering based color image segmentation techniques, namely K-means clustering and fuzzy c-means (FCM) clustering, and presents the advantages and disadvantages of both. The K-means algorithm takes less computation time than fuzzy c-means, which produces results close to those of K-means. On the other hand, in the FCM algorithm each pixel of an image can have membership in more than one cluster, which is not the case for K-means, an advantage of the FCM method. Color images contain a wide variety of information and are more complicated than gray scale images; though color image segmentation is a challenging task, it provides a path to image analysis in practical application fields. Secondly, some novel variants of the FCM algorithm for better image segmentation are also discussed, namely SFCM (spatial FCM) and THFCM (thresholding FCM). The basic FCM algorithm does not take the spatial information of the image into consideration. SFCM focuses specifically on spatial details: it introduces a spatial function into the FCM membership function and then operates with the available spatial information. THFCM is another approach that focuses on thresholding for image segmentation; its main task is to find a discerning cluster that acts as an automatic threshold. These two approaches show how better segmentation results can be obtained.
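For contrast with the fuzzy variants, the hard K-means clustering the survey compares against can be sketched on scalar intensities (Lloyd's algorithm with a simple deterministic initialization, not the survey's implementation):

```python
def kmeans_1d(data, k=2, iters=20):
    """Lloyd's k-means on scalar intensities (hard assignment)."""
    # seed centers by striding through the sorted data
    centers = sorted(data)[::max(1, len(data) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in data:
            # hard assignment: each point belongs to exactly one cluster
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)
```

Unlike FCM, each pixel here belongs to exactly one cluster, which is precisely the limitation the survey's FCM discussion addresses.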
Threshold adaptation and XOR accumulation algorithm for objects detectionIJECEIAES
Object detection, tracking, and video analysis are vital tasks for intelligent video surveillance systems and computer vision applications. Object detection based on background modelling is a major technique for extracting moving objects from video streams. This paper presents the threshold adaptation and XOR accumulation (TAXA) algorithm, applied in three systematic stages throughout the video sequence. First, noisy background details are continuously calculated, updated, and eliminated with hybrid statistical techniques. Second, thresholds are calculated with an effective mean and Gaussian for detecting the pixels of the objects. Third, in a novel decision step, XOR accumulation is used to extract object pixels from the thresholds accurately. Each stage is presented with practical representations and theoretical explanations. The proposed algorithm was tested on high resolution video with difficult scenes and lighting conditions. With an average precision of 0.90, memory use of 6.56%, CPU use of 20%, and good time performance, the overall results are superior to all the major foreground object extraction algorithms in use. In conclusion, compared to other popular OpenCV methods, the proposed TAXA algorithm has excellent detection ability.
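The mean-threshold and XOR-accumulation ideas can be sketched as follows (a simplified illustration, not the TAXA algorithm's hybrid statistics; the bias term is an assumption):

```python
def mean_threshold_mask(frame, bias=0):
    """Binarize a frame against its own mean intensity (adaptive threshold)."""
    flat = [p for row in frame for p in row]
    t = sum(flat) / len(flat) + bias
    return [[1 if p > t else 0 for p in row] for row in frame]

def xor_accumulate(masks):
    """Fold per-frame binary masks together with XOR so that pixels whose
    classification changes between frames stand out."""
    acc = masks[0]
    for m in masks[1:]:
        acc = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(acc, m)]
    return acc
```

XOR accumulation highlights exactly the pixels that flipped between thresholded frames, which is the intuition behind using it as the object-extraction decision step.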
Edge detection using caliber fuzzy C-means clustering IAESIJEECS
Edge detection is an important and classical problem in the medical field and computer vision. The caliber fuzzy c-means (CFCM) clustering algorithm for edge detection depends on the selection of the initial cluster center value. It attempts to organize a collection of pixels into clusters such that a pixel within a cluster is more similar to every other pixel in that cluster. Using the CFCM technique, the BSDS image is first clustered, and the clustered image is then given as input to the basic Canny edge detection algorithm. The application of the new parameters with fewer operations makes CFCM fruitful. According to the evaluation, the CFCM clustering function typically divides the image into four clusters. The proposed method is a robust modification of the fuzzy c-means and Canny algorithms, and its convergence is very fast compared to other edge detection algorithms. The proposed algorithm produces enhanced edge detection and better results than traditional image edge detection techniques.
A new block cipher for image encryption based on multi chaotic systemsTELKOMNIKA JOURNAL
In this paper, a new algorithm for image encryption is proposed based on three chaotic systems: the Chen system, the logistic map, and the two-dimensional (2D) Arnold cat map. First, a permutation scheme is applied to the image, and the shuffled image is then partitioned into blocks of pixels. For each block, the Chen system is employed for confusion, and the logistic map is employed to generate a substitution box (S-box) used to substitute image blocks. The S-box is dynamic: it is shuffled for each image block using a permutation operation. Then, the 2D Arnold cat map provides diffusion, after which the result is XORed using the Chen system to obtain the encrypted image. The high security of the proposed algorithm is demonstrated experimentally using histograms, unified average changing intensity (UACI), number of pixels change rate (NPCR), entropy, correlation, and keyspace analyses.
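The Arnold cat map stage, used here and in the earlier chaotic scheme, is a simple pixel permutation on a square image; a minimal sketch of one iteration (the Chen system and S-box stages are not reproduced) is:

```python
def arnold_cat(img):
    """One Arnold cat map iteration on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    N = len(img)
    out = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            out[(x + 2 * y) % N][(x + y) % N] = img[y][x]
    return out
```

The map is a bijection, so no pixel values are lost, and iterating it enough times returns the original image, which is why it works as an invertible scrambling stage.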
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTIONcscpconf
Image reconstruction is the process of obtaining the original image from corrupted data. Applications of image reconstruction include computed tomography, radar imaging, weather
forecasting, etc. Recently, the steering kernel regression method has been applied to image reconstruction [1]. There are two major drawbacks to this technique: first, it is computationally intensive; second, the output of the algorithm suffers from spurious edges (especially in the case of denoising). We propose a modified version of steering kernel regression called the Median Based Parallel Steering Kernel Regression Technique. In the proposed algorithm the first problem is overcome by implementing it on GPUs and multi-cores. The second problem is addressed by gradient based suppression using a median filter. Our algorithm gives better output than steering kernel regression. The results are
compared using root mean square error (RMSE). Our algorithm has also shown a speedup of 21x on GPUs and a speedup of 6x on multi-cores.
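The median filter used for spurious-edge suppression can be sketched in one dimension (a generic sliding-window median, not the paper's gradient-based variant; the window size is an illustrative choice):

```python
import statistics

def median_filter_1d(x, w=3):
    """Sliding-window median of a 1-D signal with edge-clamped windows."""
    h = w // 2
    return [statistics.median(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]
```

A median filter removes isolated spikes while leaving genuine step edges largely intact, which is why it suits suppressing spurious edges without blurring real ones.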
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
The progressive development of synthetic aperture radar (SAR) systems has diversified the exploitation of the images generated by these systems in different geoscience applications. Detection and monitoring of surface deformations produced by various phenomena have benefited from this evolution, realized through interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, the spatial and temporal decorrelation of the interferometric pairs used strongly limits the precision of the analysis results of these techniques. In this context, we propose a methodological approach to surface deformation detection and analysis using differential interferograms, in order to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures by simulating a linear fault merged with image pairs from the ERS1/ERS2 sensors acquired over a region of southern Algeria.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATIONcscpconf
A novel based a trajectory-guided, concatenating approach for synthesizing high-quality image real sample renders video is proposed . The lips reading automated is seeking for modeled the closest real image sample sequence preserve in the library under the data video to the HMM predicted trajectory. The object trajectory is modeled obtained by projecting the face patterns into an KDA feature space is estimated. The approach for speaker's face identification by using synthesise the identity surface of a subject face from a small sample of patterns which sparsely each the view sphere. An KDA algorithm use to the Lip-reading image is discrimination, after that work consisted of in the low dimensional for the fundamental lip features vector is reduced by using the 2D-DCT.The mouth of the set area dimensionality is ordered by a normally reduction base on the PCA to obtain the Eigen lips approach, their proposed approach by[33]. The subjective performance results of the cost function under the automatic lips reading modeled , which wasn’t illustrate the superior performance of the
method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Universities offer software engineering capstone course to simulate a real world-working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of the paper is to report on our experience in moving from Waterfall process to Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall driven and Agile driven methodologies that highlight the structure, deliverables and assessment plans.To evaluate the improvement, we conducted a survey for two different sections taught by two different instructors to evaluate students’ experience in moving from traditional Waterfall model to Agile like process. Twentyeight students filled the survey. The survey consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands one experience, which simulate a real world-working environment. The results also show that the Agile approach helped students to have overall better design and avoid mistakes they have made in the initial design completed in of the first phase of the capstone project. In addition, they were able to decide on their team capabilities, training needs and thus learn the required technologies earlier which is reflected on the final product quality
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
Using social media in education provides learners with an informal way for communication. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching software project management course. We conducted different surveys at the end of every semester to evaluate students’ satisfaction and engagement. Results show that using social media enhances students’ engagement and satisfaction. However, familiarity with the tool is an important factor for student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
In real world computing environment with using a computer to answer questions has been a human dream since the beginning of the digital era, Question-answering systems are referred to as intelligent systems, that can be used to provide responses for the questions being asked by the user based on certain facts or rules stored in the knowledge base it can generate answers of questions asked in natural , and the first main idea of fuzzy logic was to working on the problem of computer understanding of natural language, so this survey paper provides an overview on what Question-Answering is and its system architecture and the possible relationship and
different with fuzzy logic, as well as the previous related research with respect to approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, along or combined with fuzzy logic and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words which facilitate preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of the well-known languages like English or to build pronunciation dictionaries to the less known sparse languages. The precision measurement experiments show 88.9% accuracy.
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
In education, the use of electronic (E) examination systems is not a novel idea, as Eexamination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The amount of piracy in the streaming digital content in general and the music industry in specific is posing a real challenge to digital content owners. This paper presents a DRM solution to monetizing, tracking and controlling online streaming content cross platforms for IP enabled devices. The paper benefits from the current advances in Blockchain and cryptocurrencies. Specifically, the paper presents a Global Music Asset Assurance (GoMAA) digital currency and presents the iMediaStreams Blockchain to enable the secure dissemination and tracking of the streamed content. The proposed solution provides the data owner the ability to control the flow of information even after it has been released by creating a secure, selfinstalled, cross platform reader located on the digital content file header. The proposed system provides the content owners’ options to manage their digital information (audio, video, speech, etc.), including the tracking of the most consumed segments, once it is release. The system benefits from token distribution between the content owner (Music Bands), the content distributer (Online Radio Stations) and the content consumer(Fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
This paper discusses the importance of verb suffix mapping in Discourse translation system. In
discourse translation, the crucial step is Anaphora resolution and generation. In Anaphora
resolution, cohesion links like pronouns are identified between portions of text. These binders
make the text cohesive by referring to nouns appearing in the previous sentences or nouns
appearing in sentences after them. In Machine Translation systems, to convert the source
language sentences into meaningful target language sentences the verb suffixes should be
changed as per the cohesion links identified. This step of translation process is emphasized in
the present paper. Specifically, the discussion is on how the verbs change according to the
subjects and anaphors. To explain the concept, English is used as the source language (SL) and
an Indian language Telugu is used as Target language (TL)
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
In this paper, based on the definition of conformable fractional derivative, the functional
variable method (FVM) is proposed to seek the exact traveling wave solutions of two higherdimensional
space-time fractional KdV-type equations in mathematical physics, namely the
(3+1)-dimensional space–time fractional Zakharov-Kuznetsov (ZK) equation and the (2+1)-
dimensional space–time fractional Generalized Zakharov-Kuznetsov-Benjamin-Bona-Mahony
(GZK-BBM) equation. Some new solutions are procured and depicted. These solutions, which
contain kink-shaped, singular kink, bell-shaped soliton, singular soliton and periodic wave
solutions, have many potential applications in mathematical physics and engineering. The
simplicity and reliability of the proposed method is verified.
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
The using of information technology resources is rapidly increasing in organizations,
businesses, and even governments, that led to arise various attacks, and vulnerabilities in the
field. All resources make it a must to do frequently a penetration test (PT) for the environment
and see what can the attacker gain and what is the current environment's vulnerabilities. This
paper reviews some of the automated penetration testing techniques and presents its
enhancement over the traditional manual approaches. To the best of our knowledge, it is the
first research that takes into consideration the concept of penetration testing and the standards
in the area.This research tackles the comparison between the manual and automated
penetration testing, the main tools used in penetration testing. Additionally, compares between
some methodologies used to build an automated penetration testing platform.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid of 1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing
attention of neuroscientists and computer scientists, since it opens a new window to explore
functional network of human brain with relatively high resolution. BOLD technique provides
almost accurate state of brain. Past researches prove that neuro diseases damage the brain
network interaction, protein- protein interaction and gene-gene interaction. A number of
neurological research paper also analyse the relationship among damaged part. By
computational method especially machine learning technique we can show such classifications.
In this paper we used OASIS fMRI dataset affected with Alzheimer’s disease and normal
patient’s dataset. After proper processing the fMRI data we use the processed data to form
classifier models using SVM (Support Vector Machine), KNN (K- nearest neighbour) & Naïve
Bayes. We also compare the accuracy of our proposed method with existing methods. In future,
we will other combinations of methods for better accuracy.
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
In order to treat and analyze real datasets, fuzzy association rules have been proposed. Several
algorithms have been introduced to extract these rules. However, these algorithms suffer from
the problems of utility, redundancy and large number of extracted fuzzy association rules. The
expert will then be confronted with this huge amount of fuzzy association rules. The task of
validation becomes fastidious. In order to solve these problems, we propose a new validation
method. Our method is based on three steps. (i) We extract a generic base of non redundant
fuzzy association rules by applying EFAR-PN algorithm based on fuzzy formal concept analysis.
(ii) we categorize extracted rules into groups and (iii) we evaluate the relevance of these rules
using structural equation model.
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
In many applications of data mining, class imbalance is noticed when examples in one class are
overrepresented. Traditional classifiers result in poor accuracy of the minority class due to the
class imbalance. Further, the presence of within class imbalance where classes are composed of
multiple sub-concepts with different number of examples also affect the performance of
classifier. In this paper, we propose an oversampling technique that handles between class and
within class imbalance simultaneously and also takes into consideration the generalization
ability in data space. The proposed method is based on two steps- performing Model Based
Clustering with respect to classes to identify the sub-concepts; and then computing the
separating hyperplane based on equal posterior probability between the classes. The proposed
method is tested on 10 publicly available data sets and the result shows that the proposed
method is statistically superior to other existing oversampling methods.
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Data collection is an essential, but manpower intensive procedure in ecological research. An
algorithm was developed by the author which incorporated two important computer vision
techniques to automate data cataloging for butterfly measurements. Optical Character
Recognition is used for character recognition and Contour Detection is used for imageprocessing.
Proper pre-processing is first done on the images to improve accuracy. Although
there are limitations to Tesseract’s detection of certain fonts, overall, it can successfully identify
words of basic fonts. Contour detection is an advanced technique that can be utilized to
measure an image. Shapes and mathematical calculations are crucial in determining the precise
location of the points on which to draw the body and forewing lines of the butterfly. Overall,
92% accuracy were achieved by the program for the set of butterflies measured.
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Smart cities utilize Internet of Things (IoT) devices and sensors to enhance the quality of the city
services including energy, transportation, health, and much more. They generate massive
volumes of structured and unstructured data on a daily basis. Also, social networks, such as
Twitter, Facebook, and Google+, are becoming a new source of real-time information in smart
cities. Social network users are acting as social sensors. These datasets so large and complex
are difficult to manage with conventional data management tools and methods. To become
valuable, this massive amount of data, known as 'big data,' needs to be processed and
comprehended to hold the promise of supporting a broad range of urban and smart cities
functions, including among others transportation, water, and energy consumption, pollution
surveillance, and smart city governance. In this work, we investigate how social media analytics
help to analyze smart city data collected from various social media sources, such as Twitter and
Facebook, to detect various events taking place in a smart city and identify the importance of
events and concerns of citizens regarding some events. A case scenario analyses the opinions of
users concerning the traffic in three largest cities in the UAE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
The anonymity of social networks makes it attractive for hate speech to mask their criminal
activities online posing a challenge to the world and in particular Ethiopia. With this everincreasing
volume of social media data, hate speech identification becomes a challenge in
aggravating conflict between citizens of nations. The high rate of production, has become
difficult to collect, store and analyze such big data using traditional detection methods. This
paper proposed the application of apache spark in hate speech detection to reduce the
challenges. Authors developed an apache spark based model to classify Amharic Facebook
posts and comments into hate and not hate. Authors employed Random forest and Naïve Bayes
for learning and Word2Vec and TF-IDF for feature selection. Tested by 10-fold crossvalidation,
the model based on word2vec embedding performed best with 79.83%accuracy. The
proposed method achieve a promising result with unique feature of spark for big data.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Computer Science & Information Technology (CS & IT)
For region-based techniques, most of the early methods are based on the image histogram. In 1979, Otsu [2] presented a technique that modeled the image histogram as a mixture of two Gaussian distributions representing the object and the background; a threshold is selected to maximize the inter-class separation on the basis of the class variances. For edge-based thresholding, the idea of applying boundary-based attributes rests on the fact that discriminant features exist at the boundary between the object and the background [3]; this has made edge-based thresholding a popular direction of exploration. Milgram [10] applied edge information to segment images by proposing the "superslice" method, in which the edge information (gradient) is integrated with a recursive region-splitting technique. The superslice method was also applied and improved in [3, 11].
2. ADAPTIVE THRESHOLDING
We adapt this technique with some optimization. The following outlines Hoover’s adaptive
thresholding:
1) Binarize the image with a single threshold T
2) Thin the thresholded image
3) Erase all branchpoints in the thinned image
4) All remaining endpoints are placed in the probe
queue and are used as a starting point for tracking
5) Track the region with threshold T
6) If the region passes testing, set T = T - 1 and go to 5)
The testing in step 6) applies constraints that guarantee the tracked region is a valid image segment. Very small segments that remain after thresholding are "size-filtered", but this step is preceded by the application of a morphological filter to fill the small gaps between image regions.
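The tracking loop of steps 4)-6) can be sketched as follows. This simplified illustration omits the thinning and branch-point erasure of steps 2)-3) and uses a plain 4-connected region grow as the tracking step; the size test and all names here are our own assumptions, not the paper's implementation:

```python
import numpy as np
from collections import deque

def track_region(img, seed, T):
    """Grow a 4-connected region of pixels >= T starting at seed."""
    h, w = img.shape
    visited = {seed}
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited \
                    and img[ny, nx] >= T:
                visited.add((ny, nx))
                q.append((ny, nx))
    return visited

def adaptive_track(img, seed, T0, min_size=5, max_size=500):
    """Simplified Hoover-style loop: keep lowering T while the
    tracked region still passes the (size) test; return the last
    passing region and its threshold."""
    T, region = T0, None
    while T > 0:
        cand = track_region(img, seed, T)
        if not (min_size <= len(cand) <= max_size):
            break                    # region failed testing: stop
        region, T = cand, T - 1      # step 6): accept and lower T
    return region, T + 1
```

Starting from an endpoint of the thinned image, the region is re-tracked at ever lower thresholds until the testing constraints fail.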
3. KERNEL FUZZY C-MEANS CLUSTERING (KFCM)

Define a nonlinear map $\phi : x \mapsto \phi(x) \in F$, where $x \in X$ denotes the data space and $F$ is the transformed feature space, of higher (even infinite) dimension. KFCM minimizes the following objective function:
$$J_m(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m} \, \left\| \phi(x_k) - \phi(v_i) \right\|^2 \qquad (2.2)$$
where

$$\left\| \phi(x_k) - \phi(v_i) \right\|^2 = K(x_k, x_k) + K(v_i, v_i) - 2K(x_k, v_i) \qquad (2.3)$$

and $K(x, y) = \phi(x)^{T} \phi(y)$ is an inner-product kernel function. If we adopt the Gaussian function as the kernel, $K(x, y) = \exp\left(-\|x - y\|^2 / \sigma^2\right)$, then $K(x, x) = 1$. According to Eq. (2.3), Eq. (2.2) can be rewritten as
$$J_m(U, V) = 2 \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m} \left( 1 - K(x_k, v_i) \right) \qquad (2.4)$$
Minimizing Eq. (2.4) under the constraint $\sum_{i=1}^{c} u_{ik} = 1$, with $m > 1$, we have
$$u_{ik} = \frac{\left( 1 / \left( 1 - K(x_k, v_i) \right) \right)^{1/(m-1)}}{\sum_{j=1}^{c} \left( 1 / \left( 1 - K(x_k, v_j) \right) \right)^{1/(m-1)}} \qquad (2.5)$$
$$v_i = \frac{\sum_{k=1}^{n} u_{ik}^{m} \, K(x_k, v_i) \, x_k}{\sum_{k=1}^{n} u_{ik}^{m} \, K(x_k, v_i)} \qquad (2.6)$$
Here we utilize the Gaussian kernel function for simplicity. If other kernel functions are used, there will be corresponding modifications in Eqs. (2.5) and (2.6).
In fact, Eq. (2.3) can be viewed as a kernel-induced new metric in the data space, defined as

$$d(x, y) \triangleq \left\| \phi(x) - \phi(y) \right\| = \sqrt{2\left( 1 - K(x, y) \right)} \qquad (2.7)$$

and it can be proven that $d(x, y)$, as defined in Eq. (2.7), is a metric in the original space when $K(x, y)$ is taken as the Gaussian kernel function. According to Eq. (2.6), the data point $x_k$ is endowed with an additional weight $K(x_k, v_i)$, which measures the similarity between $x_k$ and $v_i$; when $x_k$ is an outlier, i.e., far from the other data points, $K(x_k, v_i)$ will be very small, so the weighted sum of data points becomes more robust.
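The metric of Eq. (2.7) and the outlier-damping effect of the weight $K(x_k, v_i)$ can be checked numerically. A small illustration with the Gaussian kernel (the variable names are ours):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / sigma^2), so K(x, x) = 1
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def kernel_distance(x, y, sigma=1.0):
    # Eq. (2.7): d(x, y) = ||phi(x) - phi(y)|| = sqrt(2 (1 - K(x, y)))
    return np.sqrt(2.0 * (1.0 - gaussian_kernel(x, y, sigma)))

v = np.array([0.0, 0.0])         # a cluster center
inlier = np.array([0.5, 0.0])    # near the center: large weight
outlier = np.array([10.0, 0.0])  # far away: weight ~ 0, damped in Eq. (2.6)
```

For the inlier, $K \approx 0.78$, while for the outlier $K \approx e^{-100} \approx 0$: the outlier contributes almost nothing to the weighted average of Eq. (2.6), which is precisely the robustness property described above. Note also that $d(x, v)$ saturates at $\sqrt{2}$ as $x$ moves far from $v$.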
The full KFCM algorithm is as follows:

KFCM Algorithm:

Step 1: Select initial class prototypes $\{v_i\}_{i=1}^{c}$.
Step 2: Update all memberships $u_{ik}$ with Eq. (2.5).
Step 3: Obtain the prototypes of the clusters as weighted averages with Eq. (2.6).
Step 4: Repeat steps 2-3 until termination. The termination criterion is $\|V_{new} - V_{old}\| \le \varepsilon$, where $\|\cdot\|$ is the Euclidean norm, $V$ is the vector of cluster centers, and $\varepsilon$ is a small number that can be set by the user (here $\varepsilon = 0.01$).
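The steps above can be sketched in a few lines of NumPy. This is a minimal illustration of the update loop of Eqs. (2.5)-(2.6) with a Gaussian kernel, not the authors' code; the initialization strategy and parameter defaults are our assumptions:

```python
import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, eps=0.01, max_iter=100, V0=None):
    """Kernel fuzzy c-means with a Gaussian kernel: alternate the
    membership update of Eq. (2.5) and the weighted-average center
    update of Eq. (2.6) until ||V_new - V_old|| <= eps."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    rng = np.random.default_rng(0)
    # Step 1: initial class prototypes (random data points unless given)
    V = X[rng.choice(n, size=c, replace=False)] if V0 is None \
        else np.asarray(V0, dtype=float)
    for _ in range(max_iter):
        # K(x_k, v_i) for all point/center pairs, shape (n, c)
        sq = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        K = np.exp(-sq / sigma ** 2)
        # Step 2, Eq. (2.5): u_ik proportional to (1/(1 - K))^(1/(m-1))
        d = np.maximum(1.0 - K, 1e-12)            # guard against K = 1 exactly
        inv = d ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # memberships sum to 1
        # Step 3, Eq. (2.6): kernel-weighted average of the data points
        W = (U ** m) * K
        V_new = (W.T @ X) / W.sum(axis=0)[:, None]
        # Step 4: termination test on the movement of the centers
        if np.linalg.norm(V_new - V) <= eps:
            V = V_new
            break
        V = V_new
    return U, V
```

On two well-separated 2-D blobs, the returned prototypes land near the blob means, and each membership row sums to 1; the kernel weight in Eq. (2.6) keeps far-away points from dragging the centers.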
4. EXPERIMENTAL RESULTS
The outputs of the adaptive threshold algorithm and KFCM are shown in Figure 1. First, the original brain test image, shown in Figure (i), is passed through the proposed adaptive threshold algorithm to produce a binary image; note the over-segmentation. The approximate contour of the white matter obtained by the adaptive threshold algorithm is shown in Figure ii
of Figure 1. The output of the adaptive threshold algorithm is then given to KFCM clustering to obtain the fuzzy image with fuzzy boundaries. Spurious regions also appear as a result of the over-segmentation.
5. DISCUSSIONS
In this paper, we proposed a new way to transform the algorithm. The original image was partitioned with the adaptive threshold algorithm and KFCM, and the controlling action of the edge indicator function was increased. The results of the adaptive threshold algorithm and KFCM segmentation are shown in Figure 1.

On the other hand, the adaptive threshold algorithm is sensitive to noise, so some redundant boundaries appeared among the candidates.
6. CONCLUSIONS
In this paper, we proposed an adaptive threshold algorithm combined with KFCM and a level set method. The results confirm that the combination of adaptive thresholding and KFCM can be used for the segmentation of low-contrast images and medical images. The validity of the new algorithm was verified in the process of extracting fine details of images. In future research, noise added to images and prior information on the object boundary used in level set extraction, such as boundary, shape, and size, will be further analyzed.
FIGURE 1: (i) the original test images; (ii) the results of the adaptive threshold algorithm, extracting the white matter; (iii) the results of the KFCM segmented outputs.
REFERENCES
[1] A. Shio, "An Automatic Thresholding Algorithm Based on an Illumination-Independent Contrast Measure", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 632-637, 4-8 June 1989.
[2] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms", IEEE Trans. on Systems, Man and Cybernetics, vol. SMC-9, no. 1, pp. 62-66, 1979.
[3] F. H. Y. Chan, F. K. Lam, and Hui Zhu, "Adaptive Thresholding by Variational Method", IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 468-473, March 1998.
[4] S. Osher and J. A. Sethian, "Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on the Hamilton-Jacobi Formulation", Journal of Computational Physics, 1988, pp. 12-49.
[5] R. Malladi, J. Sethian, and B. Vemuri, "Shape Modeling with Front Propagation: A Level Set Approach", IEEE Trans. Pattern Anal. Mach. Intell., 1995, pp. 158-175.
[6] L. Staib, X. Zeng, R. Schultz, and J. Duncan, "Shape Constraints in Deformable Models", Handbook of Medical Imaging, I. Bankman, ed., 2000, pp. 147-157.
[7] D. L. Milgram, "Region Extraction Using Convergent Evidence", IEEE Trans. on Computer Graphics and Image Processing, vol. 11, no. 1, 1979.
[8] S. D. Yanowitz and A. M. Bruckstein, "A New Method for Image Segmentation", Computer Vision, Graphics and Image Processing, vol. 46, pp. 82-95, 1989.
[9] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[10] K. L. Wu and M. S. Yang, "Alternative c-means clustering algorithms", Pattern.
[11] T. McInerney and D. Terzopoulos, "Deformable Models in Medical Image Analysis: A Survey", Medical Image Analysis, 1996, pp. 91-108.
[12] S. D. Yanowitz and A. M. Bruckstein, "A New Method for Image Segmentation", Computer Vision, Graphics and Image Processing, vol. 46, pp. 82-95, 1989.
[13] J. Bezdek, "A Convergence Theorem for the Fuzzy ISODATA Clustering Algorithms", IEEE Trans. Pattern Anal. Mach. Intell., 1980, pp. 78-82.
[14] L. Zhang, W. D. Zhou, and L. C. Jiao, "Kernel clustering algorithm", Chinese J. Computers, vol. 25(6), pp. 587-590, 2002 (in Chinese).
Authors
Tara Saikumar received his M.Tech. degree in Digital Communication Engineering from Kakatiya Institute of Technology and Science, Kakatiya University, Warangal. He was the university topper in his M.Tech. at KITS, Warangal, and also received a gold medal from Dr. D. N. Reddy, Vice-Chancellor of JNTUH, for his excellence in his B.Tech. (2004-2008). He is presently working as Assistant Professor in ECE at CMR Technical Education Society, Group of Institutions, Hyderabad. He has 34 papers to his credit in international journals and international and national conferences. He is a life member of IAENG, IACSIT, and UAEEE.