A novel method is proposed for image segmentation based on probabilistic field theory. This model assumes that all the pixels of an image, together with a set of unknown parameters, form a field. According to this model, the pixel labels are generated by a compound function of the field. The main novelty of the model is that it considers both the features of individual pixels and the interdependence among pixels. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm, so spatial smoothness is imposed directly on the prior. Numerical experiments are presented in which the proposed method and other mixture-model-based methods were tested on synthetic and real-world images; the results demonstrate that our algorithm achieves competitive performance compared with the other methods.
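For reference, the EM machinery the abstract invokes can be sketched for a plain two-component 1-D Gaussian mixture; the paper's spatially variant model adds per-pixel priors and spatial coupling that this minimal, assumption-laden sketch deliberately omits:

```python
import math

def em_gmm_1d(data, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture: the E-step computes
    per-sample responsibilities, the M-step refits weights, means, variances."""
    mu = [min(data), max(data)]   # initialize means at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: maximum-likelihood updates given the responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu, var, pi

# Two synthetic intensity populations, standing in for a two-class image
data = [0.1, 0.2, 0.15, 0.05, 0.9, 0.95, 1.0, 0.85]
mu, var, pi = em_gmm_1d(data)
```

With well-separated populations the two fitted means land near the two group means and the mixing weights near 0.5 each.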
Quantum Clustering-Based Feature Subset Selection for Mammographic I... (ijcsit)
In this paper, we present an algorithm for feature selection, labeled QC-FS (Quantum Clustering for Feature Selection), which performs the selection in two steps. First, the original feature space is partitioned into groups of similar features using the Quantum Clustering algorithm. Then a representative of each cluster is selected, using similarity measures such as the correlation coefficient (CC) and mutual information (MI); the feature that maximizes this measure is chosen by the algorithm.
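The second step, choosing a representative per cluster, can be sketched as follows. The clusters here are hand-made for illustration (the actual Quantum Clustering of step 1 is not reproduced), and the correlation coefficient stands in as the similarity measure:

```python
def pearson(a, b):
    """Pearson correlation coefficient (CC) between two feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def select_representatives(features, clusters):
    """QC-FS-style step 2: from each cluster of similar features, keep the
    one most correlated on average with its cluster mates."""
    chosen = []
    for cluster in clusters:
        def avg_cc(i):
            others = [j for j in cluster if j != i]
            if not others:
                return 0.0
            return sum(abs(pearson(features[i], features[j]))
                       for j in others) / len(others)
        chosen.append(max(cluster, key=avg_cc))
    return chosen

# Hypothetical feature columns: 0-2 are near-duplicates, 3 stands alone
features = [[0, 1, 2, 3], [0, 1, 2, 4], [0, 1, 2, 10], [5, 0, 3, 1]]
reps = select_representatives(features, clusters=[[0, 1, 2], [3]])
```

Feature 1 is the "hub" of the first cluster (highest average correlation with the other two), so it is kept along with the singleton feature 3.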
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to scientific knowledge in engineering and technology.
A Review on Classification-Based Approaches for Steganalysis Detection (Editor IJCATR)
This document summarizes two approaches for image steganalysis detection. The first approach proposes novel steganalysis algorithms based on how data hiding affects the rate-distortion characteristics of images. Features are extracted based on increased image entropy and small, imperceptible distortions from data embedding. A Bayesian classifier is then trained on these features. The second approach uses contourlet transform to represent images. It extracts features based on the first four normalized statistical moments of high and low frequency subbands and structural similarity measure of medium frequency subbands. A non-linear support vector machine is then used for classification. Experimental results show the proposed approaches can efficiently detect stego images with high accuracy and low computational cost.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open-access journal, available online and in print, that provides rapid monthly publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic, and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer-review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
IRJET: Multimodal Image Classification through Band and K-Means Clustering (IRJET Journal)
This document proposes a bilayer graph-based learning framework for multimodal image classification using limited labeled pixels. It constructs a simple graph in the first layer where each vertex is a pixel and edge weights encode pixel similarity. Unsupervised learning estimates grouping relations among pixels. These relations form a hypergraph in the second layer, on which semisupervised learning classifies pixels to address challenges of complex relationships and limited labels in multimodal images. The framework effectively exploits the underlying data structure.
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE (acijjournal)
This document summarizes a research paper that proposes a new texture feature descriptor called "NEW" for image segmentation. The NEW descriptor labels neighboring pixels and forms eight-component binary vectors to represent texture. Fuzzy c-means clustering is then used to segment images into regions based on texture. Experimental results on texture images from the Brodatz dataset show the NEW descriptor can successfully segment images into the correct number of texture regions. Accuracy, precision, and recall metrics are used to evaluate the segmentation performance.
This document discusses k-means clustering for image segmentation. It begins with an abstract describing a color-based image segmentation method using k-means clustering to partition pixels into homogeneous clusters. It then provides background on image segmentation and k-means clustering. The document outlines the k-means clustering algorithm and applies it to segment an example image ("rotapple.jpg") into three clusters corresponding to different image regions. It concludes that k-means clustering provides an effective approach for basic image segmentation.
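The pipeline that document outlines can be sketched in a few lines. Since "rotapple.jpg" is not available here, a small synthetic intensity list stands in for the image, the clustering is plain 1-D k-means over pixel values, and the quantile-based initialization is an added assumption to keep the sketch deterministic:

```python
def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means, as used for intensity-based segmentation."""
    vals = sorted(values)
    # deterministic init: one center per intensity quantile midpoint
    centers = [vals[(2 * i + 1) * len(vals) // (2 * k)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assignment step: each pixel joins its nearest center
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# A 4x4 "image" with three intensity regions (dark, mid, bright)
image = [10, 12, 11, 13,
         10, 12, 128, 130,
         129, 131, 128, 250,
         252, 251, 249, 250]
labels, centers = kmeans_1d(image, k=3)
```

The three recovered centers sit at the mean intensity of each region, and the label map partitions the pixels into the corresponding clusters.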
PERFORMANCE ANALYSIS OF CLUSTERING BASED IMAGE SEGMENTATION AND OPTIMIZATION ... (cscpconf)
Partitioning an image into several constituent components is called image segmentation. Myriad algorithms using different methods have been proposed for image segmentation, and many clustering algorithms and optimization techniques are also used for this purpose. A major challenge in segmentation evaluation comes from the fundamental conflict between generality and objectivity. With the glut of image segmentation techniques available today, the practitioners who actually use them can easily be overwhelmed. To address this problem, this paper evaluates several image segmentation techniques based on their consistency across different applications, and quantifies the different clustering algorithms based on the parameters used.
This document describes an interactive multi-label image segmentation algorithm called "GrowCut" based on cellular automata. The algorithm can segment N-dimensional images with multiple labels. With modest user input of labeled pixels, GrowCut automatically segments the rest of the image in an iterative process. It requires less user effort than other techniques for moderately difficult images. The algorithm has advantages such as efficiency, parallelizability, and extensibility to generate new segmentation methods.
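The cellular-automaton rule behind GrowCut is compact enough to sketch. This is a toy 1-D version under stated assumptions: the damping function g and the two-neighbour system are the standard choices, while the paper itself handles N-dimensional images and arbitrary neighbourhoods:

```python
def growcut_1d(image, seeds, iters=50):
    """Toy 1-D GrowCut: each cell holds (label, strength). At every step a
    neighbour 'attacks' a cell and wins if its strength, damped by the
    intensity difference, exceeds the defender's current strength."""
    n = len(image)
    maxdiff = max(image) - min(image) or 1
    label = [0] * n            # 0 = unlabeled
    strength = [0.0] * n
    for pos, lab in seeds.items():
        label[pos], strength[pos] = lab, 1.0   # user seeds start at full strength
    for _ in range(iters):
        new_label, new_strength = label[:], strength[:]
        for p in range(n):
            for q in (p - 1, p + 1):
                if 0 <= q < n:
                    g = 1.0 - abs(image[q] - image[p]) / maxdiff
                    attack = g * strength[q]
                    if attack > new_strength[p]:
                        new_label[p], new_strength[p] = label[q], attack
        label, strength = new_label, new_strength
    return label

# Dark region then bright region; one user seed in each
image = [10, 12, 11, 13, 200, 205, 198, 202]
labels = growcut_1d(image, seeds={0: 1, 7: 2})
```

Labels flood outward from the seeds but attacks across the large intensity jump are heavily damped, so the boundary settles exactly between the two regions.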
A plethora of algorithms exists for image segmentation, and the execution time of these algorithms raises several issues. Image segmentation can be cast as a label-relabeling problem under a probabilistic framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternates between maximum a posteriori (MAP) estimation and maximum likelihood (ML) estimation. In this paper, the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with those of existing algorithms, while executing faster and producing automatic segmentation without any human intervention. Its results match image edges closely, in line with human perception.
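The MAP/ML alternation described there can be illustrated on a toy 1-D signal with two classes. This is a sketch under assumptions, not the paper's method: the MAP step is ICM-style relabeling with a squared-error data term plus a Potts smoothness penalty (the weight beta is a made-up parameter), and the ML step refits each class mean:

```python
def map_ml_segment(signal, beta=1.0, iters=10):
    """Alternate MAP label estimation and ML parameter estimation
    for a two-class 1-D segmentation."""
    means = [min(signal), max(signal)]     # crude two-class initialization
    labels = [0 if abs(v - means[0]) <= abs(v - means[1]) else 1
              for v in signal]
    for _ in range(iters):
        # MAP step: iterated conditional modes over the label field
        for i, v in enumerate(signal):
            def cost(c):
                data = (v - means[c]) ** 2
                smooth = beta * sum(1 for j in (i - 1, i + 1)
                                    if 0 <= j < len(signal) and labels[j] != c)
                return data + smooth
            labels[i] = min((0, 1), key=cost)
        # ML step: class means = maximum-likelihood (Gaussian) estimates
        for c in (0, 1):
            members = [v for v, l in zip(signal, labels) if l == c]
            if members:
                means[c] = sum(members) / len(members)
    return labels, means

signal = [1, 1, 2, 2, 9, 9, 8, 9]
labels, means = map_ml_segment(signal)
```

On this piecewise-constant signal the labels split cleanly at the step, and the ML means converge to the two segment averages.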
Image Segmentation Using Two Weighted Variable Fuzzy K-Means (Editor IJCATR)
Image segmentation is the first step in image analysis and pattern recognition. It is the process of dividing an image into regions such that each region is homogeneous. Accurate and effective image segmentation algorithms are very useful in many fields, especially medical imaging. This paper presents a new approach to image segmentation that applies the k-means algorithm with two-level variable weighting. Clustering algorithms are very popular in image segmentation, as they are intuitive and easy to implement. K-means and fuzzy k-means are among the most widely used clustering algorithms in the literature, and many authors compare their new proposals against the results achieved by k-means and fuzzy k-means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable-weighting clustering algorithm for segmenting objects; it assigns a variable weight to each variable on the current partition of the data. The approach can be applied to general and/or specific images (e.g., medical and microscopic images). The proposed TW-fuzzy k-means algorithm provides better segmentation performance for various types of images, and based on the results obtained, it gives better visual quality than several other clustering methods.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a... (IJERD Editor)
This paper presents a blind steganalysis technique to effectively attack JPEG steganographic schemes, i.e., Jsteg, F5, Outguess, and DWT-based schemes. The proposed method exploits the correlations between block-DCT coefficients from intra-block and inter-block relations, and the statistical moments of the characteristic functions of the test image are selected as features. The features are extracted from the BDCT JPEG 2-array. A Support Vector Machine with cross-validation is implemented for the classification. The proposed scheme gives improved results in attacking these schemes.
The document describes a new algorithm for long-term robust visual tracking of moving objects in video sequences. The algorithm aims to overcome challenges such as geometric deformations, partial or total occlusions, and recovery after the target leaves the field of vision. It does not rely on a probabilistic process or require prior detection data. Experimental results on difficult video sequences demonstrate advantages over recent trackers. The algorithm can be used in applications like video surveillance, active vision, and industrial visual servoing.
Engineering Research Publication
International Journal of Engineering & Technical Research
ISSN: 2321-0869 (O), 2454-4698 (P)
www.erpublication.org
The document presents a new method for segmenting MR brain images that combines a hidden Markov random field (HMRF) model with a hybrid metaheuristic optimization algorithm. The HMRF model uses adaptive parameters to balance contributions from different tissue classes during segmentation. The hybrid metaheuristic algorithm improves the quality of solutions during HMRF optimization by combining the cuckoo search and particle swarm optimization algorithms. Experimental results on simulated and real MR brain images show the proposed method achieves satisfactory segmentation performance for images with noise and intensity inhomogeneity.
Eugen Zaharescu, PROJECT STATEMENT: Morphological Medical Image Indexing and Cl... (Eugen Zaharescu)
This document outlines Assoc. Prof. Eugen Zaharescu's PhD project on developing an advanced mathematical model for analyzing and processing medical images using mathematical morphology methods. The project aims to 1) implement modern mathematical methods for solving problems in medical imaging, 2) collect, index, and classify medical images and store them in a digital library, and 3) develop a software system for medical imaging using a formal model based on mathematical morphology. Zaharescu has done research on extending mathematical morphology to logarithmic image processing and on developing new morphological operators for medical image enhancement.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
This document presents a survey of contemporary research on image segmentation through clustering techniques. It discusses various clustering approaches including exclusive clustering (e.g. k-means), overlapping clustering (e.g. fuzzy c-means), hierarchical clustering, and probabilistic D-clustering. It provides details on the algorithms and steps involved in each technique. The paper analyzes different clustering methods for image segmentation and concludes that fuzzy c-means is superior but has high computational costs, while probabilistic D-clustering can avoid this issue.
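The fuzzy c-means algorithm the survey singles out differs from k-means in that every point receives a graded membership in every cluster, which is also where its extra computational cost comes from. A minimal 1-D sketch (fuzzifier m = 2, quantile initialization assumed for determinism):

```python
def fuzzy_cmeans_1d(values, k=2, m=2.0, iters=30):
    """Minimal 1-D fuzzy c-means: alternate the membership update
    (inverse-distance weighting) and the membership-weighted center update."""
    vals = sorted(values)
    centers = [vals[(2 * i + 1) * len(vals) // (2 * k)] for i in range(k)]
    u = []
    for _ in range(iters):
        # membership update: u[p][i] = 1 / sum_j (d_i/d_j)^(2/(m-1))
        u = []
        for x in values:
            d = [abs(x - c) + 1e-12 for c in centers]   # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(k)) for i in range(k)])
        # center update: membership-weighted means
        for i in range(k):
            num = sum((u[p][i] ** m) * x for p, x in enumerate(values))
            den = sum(u[p][i] ** m for p in range(len(values)))
            centers[i] = num / den
    return centers, u

values = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]
centers, u = fuzzy_cmeans_1d(values, k=2)
```

Each row of `u` sums to 1; points near a center end up with a membership close to 1 in that cluster and near 0 in the other.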
Comparative Analysis and Implementation of Structured Edge Active Contour (IJECE IAES)
This paper proposes a modified Chan-Vese model that can be applied to images for segmentation. The approach uses the linear structure tensor (LST) as input to the variational model; the structure tensor is a matrix representation of partial-derivative information. In the proposed model, the original image is used as the information channel for computing the structure tensor. Difference of Gaussians (DoG) is a feature-enhancement step that yields a less blurred image than the original. In this paper, the LST is modified by adding intensity information to enhance orientation information. Finally, an active contour model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including some with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms, such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese, and a novel structure-tensor-based model, and the proposed model is verified to be the most accurate. Its biggest advantage is clear edge enhancement.
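The Difference-of-Gaussians step mentioned above is simple to demonstrate: blurring with two different sigmas and subtracting leaves a band-pass response concentrated around intensity edges. A 1-D sketch with assumed sigmas (1 and 2) and replicate-border convolution:

```python
import math

def gauss_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of the given radius."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct convolution with replicated borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            p = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[p]
        out.append(acc)
    return out

def dog(signal, sigma1=1.0, sigma2=2.0, radius=6):
    """Difference of Gaussians: narrow blur minus wide blur."""
    a = convolve(signal, gauss_kernel(sigma1, radius))
    b = convolve(signal, gauss_kernel(sigma2, radius))
    return [x - y for x, y in zip(a, b)]

step = [0.0] * 10 + [1.0] * 10     # ideal edge between indices 9 and 10
resp = dog(step)
peak = max(range(len(resp)), key=lambda i: abs(resp[i]))
```

The response magnitude peaks within a couple of samples of the edge and decays to zero in the flat regions, which is what makes DoG useful as an edge/feature enhancer.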
A Combined Method of Fractal and GLCM Features for MRI and CT Scan Images Cla... (sipij)
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of irregularity of medical images, but this property alone does not capture the local image structure. In this paper, we present a combination of this parameter, based on box counting, with GLCM features. This combination has proved powerful, yielding good results, especially in the classification of medical textures from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical diagnostic tests for osteoporosis pathologies.
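Both descriptors combined there are easy to sketch in isolation. Below, a box-counting slope estimate for a binary grid and a GLCM contrast feature for a horizontal distance-1 offset; the grid sizes, offset, and contrast choice are illustrative assumptions, not the paper's exact configuration:

```python
import math

def box_count_dimension(grid, sizes=(1, 2, 4, 8)):
    """Box-counting fractal estimate for a non-empty n-by-n binary grid:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) against log(1/s) by least squares."""
    n = len(grid)
    pts = []
    for s in sizes:
        count = 0
        for by in range(0, n, s):
            for bx in range(0, n, s):
                if any(grid[y][x]
                       for y in range(by, min(by + s, n))
                       for x in range(bx, min(bx + s, n))):
                    count += 1
        pts.append((math.log(1.0 / s), math.log(count)))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

def glcm_contrast(img, levels):
    """Contrast from a horizontal-neighbour GLCM (distance 1, 0 degrees),
    one of the Haralick features paired with the fractal descriptor."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))
```

A completely filled grid should come out with dimension 2, and a strictly alternating two-level texture maximizes the distance-1 contrast.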
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ... (ijitcs)
The aim of this paper is to compare cell tracking using the active shape model (ASM) and active appearance model (AAM) algorithms, assessing the tracking quality of the two methods in following the mobility of living cells; a sensitive and accurate cell tracking system is essential for cell motility studies. The ASM and AAM algorithms have proved to be successful methods for matching statistical models. The experimental results indicate the ability of the (AAM) meth...
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
The document presents a method for removing large occlusions from images using sparse processing and texture synthesis. It involves decomposing the image into structure and texture images using sparse representations. The occluded regions in the structure image are filled in using sparse reconstruction, which retains image structures. Texture synthesis is then performed on the texture image to fill in the occluded texture. Finally, the reconstructed structure and texture images are combined to produce the occlusion-free output image. The method is shown to effectively remove large occlusions while avoiding blurring and retaining both structures and textures. It outperforms other inpainting methods in terms of visual quality.
This paper proposes a new fuzzy similarity measure called Fuzzy Monotonic Inclusion (FMI) to measure similarity between images for image retrieval systems. The FMI approach segments images into regions, extracts features for each region, and maps the features into a fuzzy similarity model based on fuzzy inclusion. Experimental results on the LabelMe image dataset show the FMI approach achieves higher precision than other methods, such as Unified Feature Matching and Fuzzy Histogram, in identifying images by semantic class.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS... (sipij)
This paper focuses on an improved edge model based on analysis of Curvelet coefficients. The Curvelet transform is a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient contributions are analyzed using the Scale-Invariant Feature Transform (SIFT), commonly used to study local structure in images. The permutation of Curvelet coefficients from the original image and from the edge image obtained with a gradient operator is used to improve the original edges. Experimental results show that this method brings out details on edges as the decomposition scale increases.
PERFORMANCE ANALYSIS OF CLUSTERING BASED IMAGE SEGMENTATION AND OPTIMIZATION ...cscpconf
Partitioning of an image into several constituent components is called image segmentation.
Myriad algorithms using different methods have been proposed for image segmentation. Many
clustering algorithms and optimization techniques are also being used for segmentation of
images. A major challenge in segmentation evaluation comes from the fundamental conflict
between generality and objectivity. As there is a glut of image segmentation techniques
available today, customer who is the real user of these techniques may get obfuscated. In this
paper to address the above described problem some image segmentation techniques are evaluated based on their consistency in different applications. Based on the parameters used quantification of different clustering algorithms is done.
This document describes an interactive multi-label image segmentation algorithm called "GrowCut" based on cellular automata. The algorithm can segment N-dimensional images with multiple labels. With modest user input of labeled pixels, GrowCut automatically segments the rest of the image in an iterative process. It requires less user effort than other techniques for moderately difficult images. The algorithm has advantages such as efficiency, parallelizability, and extensibility to generate new segmentation methods.
There exists a plethora of algorithms to perform image segmentation and there are several issues related to
execution time of these algorithms. Image Segmentation is nothing but label relabeling problem under
probability framework. To estimate the label configuration, an iterative optimization scheme is
implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum
likelihood (ML) estimations. In this paper this technique is modified in such a way so that it performs
segmentation within stipulated time period. The extensive experiments shows that the results obtained are
comparable with existing algorithms. This algorithm performs faster execution than the existing algorithm
to give automatic segmentation without any human intervention. Its result match image edges very closer to
human perception.
ER Publication,
IJETR, IJMCTR,
Journals,
International Journals,
High Impact Journals,
Monthly Journal,
Good quality Journals,
Research,
Research Papers,
Research Article,
Free Journals, Open access Journals,
erpublication.org,
Engineering Journal,
Science Journals,
Image Segmentation Using Two Weighted Variable Fuzzy K MeansEditor IJCATR
Image segmentation is the first step in image analysis and pattern recognition. Image segmentation is the process of dividing an image into different regions such that each region is homogeneous. The accurate and effective algorithm for segmenting image is very useful in many fields, especially in medical image. This paper presents a new approach for image segmentation by applying k-means algorithm with two level variable weighting. In image segmentation, clustering algorithms are very popular as they are intuitive and are also easy to implement. The K-means and Fuzzy k-means clustering algorithm is one of the most widely used algorithms in the literature, and many authors successfully compare their new proposal with the results achieved by the k-Means and Fuzzy k-Means. This paper proposes a new clustering algorithm called TW-fuzzy k-means, an automated two-level variable weighting clustering algorithm for segmenting object. In this algorithm, a variable weight is also assigned to each variable on the current partition of data. This could be applied on general images and/or specific images (i.e., medical and microscopic images). The proposed TW-Fuzzy k-means algorithm in terms of providing a better segmentation performance for various type of images. Based on the results obtained, the proposed algorithm gives better visual quality as compared to several other clustering methods.
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...IJERD Editor
This paper presents a blind steganalysis technique to effectively attack the JPEG steganographic
schemes i.e. Jsteg, F5, Outguess and DWT Based. The proposed method exploits the correlations between
block-DCTcoefficients from intra-block and inter-block relation and the statistical moments of characteristic
functions of the test image is selected as features. The features are extracted from the BDCT JPEG 2-array.
Support Vector Machine with cross-validation is implemented for the classification.The proposed scheme gives
improved outcome in attacking.
The document describes a new algorithm for long-term robust visual tracking of moving objects in video sequences. The algorithm aims to overcome challenges such as geometric deformations, partial or total occlusions, and recovery after the target leaves the field of vision. It does not rely on a probabilistic process or require prior detection data. Experimental results on difficult video sequences demonstrate advantages over recent trackers. The algorithm can be used in applications like video surveillance, active vision, and industrial visual servoing.
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
The document presents a new method for segmenting MR brain images that combines a hidden Markov random field (HMRF) model with a hybrid metaheuristic optimization algorithm. The HMRF model uses adaptive parameters to balance contributions from different tissue classes during segmentation. The hybrid metaheuristic algorithm improves the quality of solutions during HMRF optimization by combining the cuckoo search and particle swarm optimization algorithms. Experimental results on simulated and real MR brain images show the proposed method achieves satisfactory segmentation performance for images with noise and intensity inhomogeneity.
Eugen Zaharescu-PROJECT STATEMENT-Morphological Medical Image Indexing and Cl...Eugen Zaharescu
This document outlines PhD. Assoc. Professor Eugen ZAHARESCU's project on developing an advanced mathematical model for analyzing and processing medical images using mathematical morphology methods. The project aims to 1) implement modern mathematical methods for solving problems in medical imaging, 2) collect, index and classify medical images and store them in a digital library, and 3) develop a software system for medical imaging using a formal model based on mathematical morphology. ZAHARESCU has done research on extending mathematical morphology to logarithmic image processing and developing new morphological operators for medical image enhancement.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
This document presents a survey of contemporary research on image segmentation through clustering techniques. It discusses various clustering approaches including exclusive clustering (e.g. k-means), overlapping clustering (e.g. fuzzy c-means), hierarchical clustering, and probabilistic D-clustering. It provides details on the algorithms and steps involved in each technique. The paper analyzes different clustering methods for image segmentation and concludes that fuzzy c-means is superior but has high computational costs, while probabilistic D-clustering can avoid this issue.
Comparative analysis and implementation of structured edge active contour IJECEIAES
This paper proposes a modified Chan-Vese model for image segmentation. The approach uses the linear structure tensor (LST), a matrix representation of partial-derivative information, as input to the variational model. In the proposed model, the original image is used as the information channel for computing the structure tensor. A Difference of Gaussians (DoG) step provides feature enhancement, yielding a less blurred image than the original. The LST is further modified by adding intensity information to enhance orientation information. Finally, an Active Contour Model (ACM) is used to segment the images. The proposed algorithm is tested on various images, including some with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese, and a novel structure-tensor-based model, and the proposed model is verified to have the best accuracy. Its biggest advantage is clear edge enhancement.
A combined method of fractal and glcm features for mri and ct scan images cla...sipij
Fractal analysis has been shown to be useful in image processing for characterizing shape and gray-scale
complexity. The fractal feature is a compact descriptor that gives a numerical measure of the degree of
irregularity of medical images, but on its own it does not capture the local image
structure. In this paper, we present a combination of this box-counting-based parameter with GLCM
features. This combination has produced good results, especially in the classification of medical textures
from MRI and CT scan images of trabecular bone. The method has the potential to improve clinical
diagnostic tests for osteoporosis pathologies.
CELL TRACKING QUALITY COMPARISON BETWEEN ACTIVE SHAPE MODEL (ASM) AND ACTIVE ...ijitcs
The aim of this paper is to compare cell tracking using the active shape model (ASM)
and active appearance model (AAM) algorithms, assessing the tracking quality of the two
methods for following the mobility of living cells. A sensitive and accurate cell tracking system is
essential for cell motility studies. The ASM and AAM
algorithms have proved to be successful methods for matching statistical models. The experimental results
indicate the ability of the AAM method…
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESISIJCSEA Journal
The document presents a method for removing large occlusions from images using sparse processing and texture synthesis. It involves decomposing the image into structure and texture images using sparse representations. The occluded regions in the structure image are filled in using sparse reconstruction, which retains image structures. Texture synthesis is then performed on the texture image to fill in the occluded texture. Finally, the reconstructed structure and texture images are combined to produce the occlusion-free output image. The method is shown to effectively remove large occlusions while avoiding blurring and retaining both structures and textures. It outperforms other inpainting methods in terms of visual quality.
This paper proposes a new fuzzy similarity measure called Fuzzy Monotonic Inclusion (FMI) to measure similarity between images for image retrieval systems. The FMI approach segments images into regions, extracts features for each region, and maps the features into a fuzzy similarity model based on fuzzy inclusion. Experimental results on the Label Me image dataset show the FMI approach achieves higher precision than other methods like Unified Feature Matching and Fuzzy Histogram in identifying images by semantic class.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Increase Efficiency with Enhanced Payroll ServicesInsideUp
This document provides information on selecting a payroll services vendor. It discusses the benefits of outsourcing payroll processing and highlights important criteria for choosing a vendor, such as reliability, communication, cost, and expertise. Several top payroll services companies are listed and described, including ADP, Paychex, Paylocity, and SurePayroll. The document emphasizes performing thorough reference checks and considering unbiased sources when evaluating potential vendors.
Ten Types of Business Financing You May Not Have TriedInsideUp
This document discusses 10 types of business financing options including factoring, merchant cash advances, lines of equity, equipment loans and leases, commercial mortgages, microloans, SBA loans, franchise financing, SBA 504 loans, and the loan application process. It provides brief descriptions of each financing type and notes some of their key terms and requirements. The overall document serves as a guide to help business owners identify suitable financing options to meet their needs and grow their business.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS...sipij
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is
a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient
contributions have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study
local structure in images. The permutation of Curvelet coefficients between the original image and the edge image
obtained from a gradient operator is used to improve the original edges. Experimental results show that this
method brings out details on edges as the decomposition scale increases.
Financing options to help your business growInsideUp
This document provides an overview of various financing options available to help businesses grow, including business loans, merchant cash advances, home equity loans, equipment loans and leases, commercial mortgages, microloans, SBA loans, franchise financing, and SBA 504 loans. It discusses the types of each option, eligibility requirements, acceptable uses of funds, loan approval processes, and ensuring applications address the five C's of business credit.
NIRS-BASED CORTICAL ACTIVATION ANALYSIS BY TEMPORAL CROSS CORRELATIONsipij
In this study we present a signal-processing method to determine dominant channels in near-infrared spectroscopy (NIRS). Cross-correlation is computed to compare measuring channels and identify delays between them, and a visual inspection is performed to find possible dominant channels. The
outcomes demonstrated that the visual inspection exhibited evoked-related activations in the primary somatosensory cortex (S1) after stimulation, consistent with comparable studies, and the cross-correlation analysis discovered dominant channels on both cerebral hemispheres. The analysis also showed a relationship between dominant channels and adjacent channels. Our results therefore offer a new
method to identify dominant regions in the cerebral cortex using near-infrared spectroscopy. These findings also have implications for reducing the number of channels by eliminating those irrelevant to the experiment.
Time of arrival based localization in wireless sensor networks a non linear ...sipij
In this paper, we aim to obtain the location of a sensor node deployed in a Wireless Sensor Network (WSN) using a Time of Arrival based localization technique. The position of the unknown sensor node is estimated with two non-linear techniques, Non-linear Least Squares and Maximum Likelihood, and their performance is compared against the Cramer-Rao Lower Bound (CRLB). Each technique is solved with iterative approaches, namely the Newton-Raphson, Gauss-Newton, and Steepest Descent estimates, which are compared. From the simulation study, localization
based on the Maximum Likelihood approach has the highest localization accuracy.
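As a rough sketch of one of the iterative estimators mentioned above, here is a minimal Gauss-Newton solver for ToA ranges. The function and variable names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def gauss_newton_toa(anchors, ranges, x0, iters=20):
    """Gauss-Newton estimate of a node position from time-of-arrival ranges.

    anchors : (m, 2) known anchor positions
    ranges  : (m,)  measured distances (propagation speed * time of arrival)
    x0      : (2,)  initial position guess
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                      # (m, 2) offsets to anchors
        dist = np.linalg.norm(diff, axis=1)     # predicted ranges
        r = ranges - dist                       # residuals
        J = -diff / dist[:, None]               # Jacobian of the residuals
        # Solve the normal equations J^T J step = -J^T r in least squares
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-9:
            break
    return x
```

With noisy ranges, the same loop returns the non-linear least-squares estimate whose accuracy can then be compared against the CRLB.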
USING CRM SOLUTIONS TO INCREASE SALES AND RETAIN VALUABLE CUSTOMERSInsideUp
CRM is an important tool for creating business success. The better your customer relationships, the
easier it is to conduct business and generate revenue. Using available technology to improve
customer relations management simply makes good business sense.
If you need further information, or would like assistance in selecting a CRM provider, go to
http://www.insideup.com/compare/customer_relationship_management
State-of-the-art Automatic Speech Recognition (ASR) systems lack the ability to identify spoken words that have non-standard pronunciations. In this paper, we present a new classification algorithm to identify pronunciation variants. It uses the Dynamic Phone Warping (DPW) technique to compute the
phonetic distance between pronunciations, and a critical-distance threshold criterion for the classification. The proposed method consists of two steps: a training step that estimates a critical distance
parameter from transcribed data, and a second step that uses this critical distance criterion to classify the input utterances into pronunciation variants and OOV words.
The algorithm is implemented in Java. The classifier is trained on data sets from the TIMIT
speech corpus and the CMU pronunciation dictionary. A confusion matrix and the precision, recall, and accuracy metrics are used for the performance evaluation. Experimental results show significant performance improvement over existing classifiers.
The document presents a novel pre-processing approach to improve optical character recognition (OCR) accuracy on document images. The approach uses a stack of enhancement techniques including illumination adjustment, grayscale conversion, sharpening, and binarization. Specifically, it applies contrast limited adaptive histogram equalization to adjust illumination, uses a luminance algorithm for grayscale optimization, employs unsharp masking to enhance text, and performs Otsu's method for binarization. The proposed method aims to compensate for distortions and improve OCR accuracy in a nonparametric and unsupervised manner. It is evaluated on standard datasets and shown to significantly improve text detection and OCR performance.
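To make the final binarization step concrete, here is a minimal NumPy sketch of Otsu's method. The function name is ours and this is not the evaluated pipeline, only an illustration of the thresholding idea:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of a uint8 grayscale image: the gray level
    that maximizes the between-class variance of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability per level
    mu = np.cumsum(p * np.arange(256))    # cumulative mean per level
    mu_t = mu[-1]                         # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf            # guard against empty classes
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))
```

Pixels at or below the returned level go to the background class; in the paper's pipeline this runs after illumination adjustment, grayscale conversion, and sharpening.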
Why the Right Merchant Account is Vital to Business GrowthInsideUp
This document provides information on setting up a merchant account to accept credit and debit card payments for a business. It discusses the advantages and disadvantages of accepting card payments, types of merchant accounts, factors considered in applications, fees involved, and tips for choosing a provider and maintaining the account in good standing. Example merchant account providers are also compared.
A modified distance regularized level set model for liver segmentation from c...sipij
Segmentation of organs from medical images is an active and interesting area of research, and liver segmentation poses more challenges and difficulties than segmentation of other organs. In this paper we demonstrate a liver segmentation method for computed tomography images. We revisit the distance regularized level set (DRLS) model by deploying new balloon forces. These forces control the direction of the evolution and slow it down in regions with weak or missing edges. The newly added balloon forces discourage the evolving contour from exceeding the liver
boundary or leaking through a region with a weak or absent edge. Our
experimental results confirm that the method yields a satisfactory overall segmentation outcome. Compared with the original DRLS model, our model proves more effective in handling oversegmentation problems.
A divisive hierarchical clustering based method for indexing image informationsipij
In most practical applications of image retrieval, high-dimensional feature vectors are required, but current multi-dimensional indexing structures lose their efficiency as the dimensionality grows. Our goal is to propose a divisive hierarchical clustering-based multi-dimensional indexing structure that remains efficient in high-dimensional feature spaces. A projection pursuit method is used to find a direction onto which the data's projections maximize an approximation of negentropy, providing the essential information for partitioning the data space. Various tests and experimental results on high-dimensional datasets indicate the performance of the proposed method in comparison with others.
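The negentropy-guided projection search can be illustrated with the standard FastICA-style approximation J(y) ≈ (E[G(y)] − E[G(ν)])², where G(u) = log cosh u and ν is a standard normal variable. This sketch samples random candidate directions instead of running the paper's projection pursuit optimizer, and all names are illustrative:

```python
import numpy as np

def negentropy_score(y, rng):
    """FastICA-style negentropy approximation for a zero-mean,
    unit-variance projection y, with G(u) = log cosh(u)."""
    gauss = rng.standard_normal(y.size)  # Monte Carlo reference E[G(nu)]
    return (np.mean(np.log(np.cosh(y))) - np.mean(np.log(np.cosh(gauss)))) ** 2

def best_projection(X, n_candidates=200, seed=0):
    """Crude projection pursuit: try random unit directions and keep the one
    whose standardized projection maximizes approximate negentropy."""
    rng = np.random.default_rng(seed)
    best_w, best_j = None, -1.0
    for _ in range(n_candidates):
        w = rng.standard_normal(X.shape[1])
        w /= np.linalg.norm(w)
        y = X @ w
        y = (y - y.mean()) / (y.std() + 1e-12)
        j = negentropy_score(y, rng)
        if j > best_j:
            best_w, best_j = w, j
    return best_w, best_j
```

A strongly non-Gaussian (e.g. bimodal) projection scores higher than a Gaussian one, which is what makes the direction useful for partitioning.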
Routing in All-Optical Networks Using Recursive State Space Techniquesipij
In this paper, we minimize the effects of failures on network performance by using a suitable Routing
and Wavelength Assignment (RWA) method without disturbing other performance criteria such as blocking
probability (BP) and network management (NM). The computational complexity is reduced by using Kalman
Filter (KF) techniques. The minimum reconfiguration probability routing (MRPR) algorithm must be
able to select the most reliable routes and assign wavelengths to connections in a manner that efficiently utilizes the established lightpaths (LPs), considering all possible requests.
Blind Image Seperation Using Forward Difference Method (FDM)sipij
In this paper, blind image separation is performed by exploiting sparse representations of images. A new sparse representation called the forward difference method is proposed. It is known that most of the independent component analysis (ICA) basis functions extracted from images are sparse but give an unreliable sparseness measure. In the proposed method, the image mixtures are first transformed into sparse images. These images are divided into blocks, and the ℓ0-norm sparseness measure is applied to each block. The block with the highest sparseness is used to determine the separation matrix. The efficiency of the proposed method is compared with other sparse representation functions.
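The block-wise sparseness selection can be sketched as follows. The `forward_difference` transform and the ℓ0-style count below are minimal illustrations under assumed names and a fixed block size, not the paper's exact procedure:

```python
import numpy as np

def forward_difference(img):
    """Horizontal forward difference: a simple sparsifying transform."""
    return np.diff(img, axis=1)

def sparsest_block(img, block=8, eps=1e-3):
    """Split the (sparsified) image into block x block tiles and return the
    tile position with the fewest coefficients above eps, an l0-style count."""
    h, w = img.shape
    best, best_count = None, None
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = img[i:i + block, j:j + block]
            count = int(np.count_nonzero(np.abs(tile) > eps))
            if best_count is None or count < best_count:
                best, best_count = (i, j), count
    return best, best_count
```

The selected block (maximally sparse region) would then feed the estimation of the separation matrix.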
Combining Generative And Discriminative Classifiers For Semantic Automatic Im...CSCJournals
The object image annotation problem is basically a classification problem, and there are many different modeling approaches to its solution. These approaches fall into two main categories: generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method that achieves this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. The evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
Edge detection algorithm based on quantum superposition principle and photons...IJECEIAES
The detection of object edges in images is a crucial step employed in a vast number of computer vision applications, for which a series of different algorithms has been developed in recent decades. This paper proposes a new edge detection method based on quantum information, which works in two main steps: (i) an image enhancement stage that employs the quantum superposition law and (ii) an edge detection stage based on the probability of photon arrival at the camera sensor. The proposed method has been tested on synthetic and real images devoted to agriculture applications, where the Fram & Deutsch criterion has been adopted to evaluate its performance. The results show that the proposed method gives better results in terms of detection quality and computation time than classical edge detection algorithms such as Sobel, Kayyali, and Canny, and a more recent algorithm based on Shannon entropy.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
New approach to multispectral image fusion based on a weighted mergeIAEME Publication
The document discusses a new approach for multispectral image fusion based on a weighted merge. Each image band is modeled as a mixture of Gaussian distributions using the Expectation-Maximization algorithm. The weighted coefficients for each band are extracted from the similarity between that band and the other bands, calculated with a cost function of the distance between the Gaussian distribution parameters. This new approach aims to reduce redundancy while emphasizing complementary data between bands.
Colour Image Segmentation Using Soft Rough Fuzzy-C-Means and Multi Class SVM ijcisjournal
Color image segmentation algorithms in the literature segment an image on the basis of color, texture, and
also as a fusion of both color and texture. In this paper, a color image segmentation algorithm is proposed
by extracting both texture and color features and applying them to the One-Against-All Multi Class Support
Vector Machine classifier for segmentation. A novel Power Law Descriptor (PLD) is used for extracting
the textural features and homogeneity model is used for obtaining the color features. The Multi Class SVM
is trained using the samples obtained from Soft Rough Fuzzy-C-Means (SRFCM) clustering. Fuzzy set
based membership functions capably handle the problem of overlapping clusters. The lower and upper
approximation concepts of rough sets deal well with uncertainty, vagueness, and incompleteness in data.
Soft set theory does not require parameterization tools for its definition. The strengths of soft
sets, rough sets, and fuzzy sets are incorporated in the proposed algorithm to achieve improved segmentation
performance. The Power Law Descriptor used for texture feature extraction has the advantage of
operating in the spatial domain, thereby reducing computational complexity. The proposed algorithm
achieves better performance than the state-of-the-art algorithms found in the
literature.
This project retrieves similar geographic images from a dataset based on extracted features.
Retrieval is the process of collecting the relevant images from a dataset that contains a large number of
images. First, a preprocessing step removes noise from the input image with
a Gaussian filter. Second, the Gray Level Co-occurrence Matrix (GLCM), Scale Invariant
Feature Transform (SIFT), and moment-invariant feature algorithms are implemented to extract
features from the images. The relevant geographic images are then retrieved from the dataset,
which consists of 40 images in total, using the Euclidean distance between feature vectors. SIFT aims at
reliable recognition: the features extracted from the training image must remain
detectable under changes in image scale, noise, and illumination. The GLCM counts how often pairs of
gray-level values occur. While the focus is on image retrieval, the project is also effective in
applications such as detection and classification.
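A minimal sketch of the GLCM feature and Euclidean-distance retrieval steps described above (illustrative function names and quantization, not the project's code):

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Gray Level Co-occurrence Matrix: how often gray level j occurs at
    offset (dy, dx) from gray level i, normalized to sum to 1."""
    q = (gray.astype(int) * levels // 256).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                               # offset neighbours
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # accumulate pair counts
    return m / m.sum()

def retrieve(query_feat, db_feats, k=3):
    """Rank database images by Euclidean distance between feature vectors."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]
```

In the project, the flattened GLCM (together with SIFT and moment features) would form each image's feature vector, and `retrieve` returns the k nearest of the 40 dataset images.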
A digital image forensic approach to detect whether
an image has been seam carved is investigated herein.
Seam carving is a content-aware image retargeting technique
that preserves the semantically important content of an image
while resizing it. The same technique, however, can be used
for malicious tampering of an image. Eighteen energy, seam, and
noise related features defined by Ryu [1] are produced using
Sobel's [2] gradient filter and Rubinstein's [3] forward energy
criterion enhanced with image gradients. An extreme gradient
boosting classifier [4] is trained to make the final decision.
Experimental results show that the proposed approach improves
the detection accuracy by 5 to 10% for seam carved images
with different scaling ratios when compared with other state-of-the-art methods.
A new hybrid method for the segmentation of the brain mrissipij
Magnetic resonance imaging is a modality with undeniable qualities of contrast and tissue
characterization, of interest for the follow-up of various pathologies such as multiple
sclerosis. In this work, a new hybrid segmentation method is presented and applied to brain MRIs. The
extracted brain image is pre-processed with the Non-Local Means filter. A theoretical approach is
proposed, and the last section is organized around an experimental part studying the
behavior of our model on textured images. To validate our model, different segmentations were
performed on pathological brain MRIs, and the obtained results were compared with the results obtained by
other models. These results show the effectiveness and robustness of the suggested approach.
1. The document discusses using statistical learning methods like Gaussian mixture models (GMM) and dynamic component allocation (DCA) for hyperspectral image classification.
2. GMM represents pixel spectra as mixtures of Gaussian distributions. DCA is an algorithm that dynamically adds and removes Gaussian components to better characterize the data during training.
3. The document outlines how DCA works, including merging similar Gaussian modes, splitting modes with high kurtosis, and pruning insignificant modes. These techniques aim to learn the appropriate number of mixture components from the data.
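The kurtosis-based split test in step 3 can be illustrated with a simplified one-dimensional heuristic. The threshold and function names are assumptions for the sketch, not the document's exact criterion:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: about 0 for a Gaussian, nonzero otherwise."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

def should_split(samples, threshold=1.0):
    """DCA-style heuristic: flag a mixture component for splitting when the
    samples assigned to it deviate strongly from Gaussianity."""
    return abs(excess_kurtosis(samples)) > threshold
```

A component whose assigned pixel spectra are, say, bimodal would be flagged and replaced by two narrower Gaussian modes, while well-fit components are left alone.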
The document proposes an anomaly detection scheme for hyperspectral images based on a non-Gaussian mixture model using a Student's t-distribution. It estimates the background probability density function using a Bayesian approach that models each pixel as a mixture of Student's t distributions. The anomaly detection strategy then applies a generalized likelihood ratio test. Experimental results on real hyperspectral data show the proposed Bayesian Student's t mixture model can reliably estimate the background distribution and effectively detect anomalous objects, outperforming a Gaussian mixture model approach.
Texture classification of fabric defects using machine learning IJECEIAES
In this paper, a novel algorithm for automatic fabric defect classification is proposed, based on the combination of a texture analysis method and a support vector machine (SVM). Three texture methods, GLCM, LBP, and LPQ, were used, compared, and combined with the SVM classifier. The system was tested on the TILDA database, and a comparative study of the performance and running time of the three methods was carried out. The results are interesting: LBP is the best method for recognition and classification, and the SVM proves to be a suitable classifier for such problems. We also demonstrate that some defects are easier to classify than others.
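For reference, the LBP texture operator compared above can be sketched in a few lines of NumPy. This is a basic 3x3 variant; the actual system may use a different radius or uniform patterns:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours of each
    interior pixel at the centre value and pack the bits into a code 0..255."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]  # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (n >= c).astype(int) << bit
    return code
```

A histogram of these codes over an image patch is the feature vector that would be fed to the SVM.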
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...csandit
This study suggests 3-dimensional modelling of real-world objects in a steady state from multiple point
cloud depth scans taken with a sensing camera, together with the application of a smoothing algorithm.
The 3D models are represented by a polygon structure built from the point cloud coordinates (x, y, z),
corresponding to the position of the 3D model in space, obtained from nodal points
connected by triangulation. Gaussian smoothing and the developed methods are applied to the mesh formed by
merging these polygons, and a new mesh simplification and augmentation algorithm is
suggested for the 3D modelling, so that the merged mesh can be
rendered in a more compact, smooth, and fluent way. The study shows that the applied
triangulation and smoothing methods produce fast and robust mesh
structures compared to existing methods, with no remeshing necessary for refinement
and reduction.
Sinusoidal Function for Population Size in Quantum Evolutionary Algorithm and...sipij
Fractal image compression is a well-known problem in the class of NP-hard problems. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm that uses a probabilistic representation of solutions and is highly suitable for combinatorial problems like the knapsack problem. Genetic algorithms are widely used for fractal image compression, but QEA has not yet been applied to this kind of problem. This paper improves QEA by changing the population size and uses it in fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) encoding of an image is found with the QEA method. Experimental results show that our method performs better than GA-based and conventional fractal image compression algorithms.
The document describes a new GIS tool that classifies lands around selected monuments using texture analysis and machine learning. The tool extracts sub-images around the monument, calculates texture features using GLCM, and classifies the lands using minimum distance classification to identify flat areas for constructing buildings like museums or visitor centers. Key steps include feature extraction using GLCM, calculating metrics like entropy and correlation, and classifying new images based on closest texture feature vectors in the training database.
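The entropy metric and minimum distance classification steps mentioned above might be sketched as follows. The helper names are illustrative, and the tool's actual GLCM feature set is larger:

```python
import numpy as np

def glcm_entropy(p):
    """Entropy (in bits) of a normalized co-occurrence matrix p (sums to 1)."""
    nz = p[p > 0]                      # ignore empty cells, log(0) undefined
    return float(-np.sum(nz * np.log2(nz)))

def min_distance_classify(feat, class_means):
    """Minimum distance classifier: assign the texture feature vector to the
    class whose mean feature vector is nearest in Euclidean distance."""
    d = np.linalg.norm(class_means - feat, axis=1)
    return int(np.argmin(d))
```

Each sub-image around a monument yields a feature vector (entropy, correlation, etc.), and the nearest class mean in the training database decides whether the land is flat enough for construction.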
Local Distance and Dempster-Dhafer for Multi-Focus Image Fusionsipij
The document describes a new method for fusing multi-focus images using Dempster-Shafer theory and local variability (DST-LV). The method calculates the quadratic distance between each pixel value and its neighboring pixels to determine local variability, which is then used to define mass functions in Dempster-Shafer theory. The proposed method was tested on 150 images and found to produce higher quality fused images compared to other popular fusion methods like PCA, DWT, and gradient bilateral, with lower RMSE values. Potential applications of the method include image fusion for drones, medical imaging, and food quality inspection.
LOCAL DISTANCE AND DEMPSTER-DHAFER FOR MULTI-FOCUS IMAGE FUSION sipij
This work proposes a new method of image fusion using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with respect to its neighbours: it calculates the quadratic distance between the value of the pixel I(x, y) at each point and the values of all the neighbouring pixels. This local variability is used to determine the mass function defined in Dempster-Shafer theory, whose two studied classes are the fuzzy (blurred) part and the focused part. The results of the proposed method are significantly better than those of other methods.
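The local variability computation described here, the quadratic distance between each pixel and its neighbours, can be sketched as follows (an assumed 8-neighbour variant; border pixels simply use the neighbours that exist):

```python
import numpy as np

def local_variability(img):
    """Sum of squared differences between each pixel and its 8 neighbours."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # valid pixels for this neighbour offset, and their neighbours
            ys = slice(max(dy, 0), h + min(dy, 0))
            xs = slice(max(dx, 0), w + min(dx, 0))
            ysn = slice(max(-dy, 0), h + min(-dy, 0))
            xsn = slice(max(-dx, 0), w + min(-dx, 0))
            out[ys, xs] += (img[ys, xs] - img[ysn, xsn]) ** 2
    return out
```

High variability suggests an in-focus region; the map would then drive the Dempster-Shafer mass functions for the blurred and focused classes.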
An Hypergraph Object Oriented Model For Image Segmentation And AnnotationCrystal Sanchez
This document presents a system for segmenting images into regions and annotating those regions semantically. It uses a hypergraph object-oriented model constructed on a hexagonal image structure to represent the image, the segmentation results, and the annotation information. The system segments images by treating segmentation as a hypergraph partitioning problem based on color and syntactic features. Experimental results on the Berkeley Dataset show the method is robust.
Abstract: Many applications such as robot navigation, defense, medical imaging, and remote sensing perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images, using guided filtering for the extraction. A Gaussian and Laplacian pyramidal approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method obtains better fusion performance on
all sets of images; the results clearly indicate the feasibility of the proposed approach.
Similar to Energy minimization based spatially (20)
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Accident detection system project report.pdfKamal Acharya
The Rapid growth of technology and infrastructure has made our lives easier. The
advent of technology has also increased the traffic hazards and the road accidents take place
frequently which causes huge loss of life and property because of the poor emergency facilities.
Many lives could have been saved if emergency service could get accident information and
reach in time. Our project will provide an optimum solution to this draw back. A piezo electric
sensor can be used as a crash or rollover detector of the vehicle during and after a crash. With
signals from a piezo electric sensor, a severe accident can be recognized. According to this
project when a vehicle meets with an accident immediately piezo electric sensor will detect the
signal or if a car rolls over. Then with the help of GSM module and GPS module, the location
will be sent to the emergency contact. Then after conforming the location necessary action will
be taken. If the person meets with a small accident or if there is no serious threat to anyone’s
life, then the alert message can be terminated by the driver by a switch provided in order to
avoid wasting the valuable time of the medical rescue team.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Signal & Image Processing : An International Journal (SIPIJ) Vol.6, No.4, August 2015
DOI : 10.5121/sipij.2015.6406
ENERGY MINIMIZATION-BASED SPATIALLY
CONSTRAINED MIXTURE MODEL AND ITS
APPLICATION TO IMAGE SEGMENTATION
Zhiyong Xiao, Yunhao Yuan, Jianjun Liu and Jinlong Yang
School of Internet of Things Engineering, also with Key Laboratory of
Advanced Process Control for Light Industry (Ministry of Education),
Jiangnan University, Wuxi 214122, China.
ABSTRACT
A novel method is proposed for image segmentation based on probabilistic field theory. This model
assumes that all the pixels of an image, together with some unknown parameters, form a field. According
to this model, the pixel labels are generated by a compound function of the field. The main novelty of this
model is that it considers both the features of the pixels and the interdependence among the pixels. The
parameters are generated by a novel spatially variant mixture model and estimated by an
expectation-maximization (EM)-based algorithm. Thus, we simultaneously impose spatial smoothness on
the prior knowledge. Numerical experiments are presented in which the proposed method and other
mixture model-based methods were tested on synthetic and real-world images. The experimental results
demonstrate that our algorithm achieves competitive performance compared to other methods.
KEYWORDS
Probabilistic field theory, spatial constraints, parameter estimation, expectation-maximization (EM)
algorithm, mixture model, image segmentation.
1. INTRODUCTION
Image segmentation is an important step in image processing and computer vision. The task
of image segmentation is to classify image pixels based on the coherence of certain features such
as intensity, color, texture, motion, and location. Many methods have been previously proposed for
image segmentation [1], [2]. These methods can be classified into two groups: contour-based
approaches and region-based approaches.
The aim of the contour-based approaches is to find the boundaries of objects in an image. Many
contour-based segmentation algorithms have been developed, such as ”snakes” or active contours
[3], GVF (gradient vector flow) [4], VFC (vector field convolution) [5], Sobolev gradient [6].
Region-based approaches classify an image into multiple consistent classes. The main research
directions are focused on graph cut approaches [7], mean shift algorithm [8] and clustering based
methods [9]. In this paper, we will focus our attention on clustering methods for image
segmentation.
The mixture model (MM) [10], [11] is the most commonly used model for clustering. In this
approach, the pixels are viewed as coming from a mixture of probability distributions, each
representing a different component. The parameters of the probability distributions can be
estimated very efficiently through maximum likelihood (ML) using the expectation-maximization
(EM) algorithm [12], [13]. When the parameters have been estimated, each pixel can be assigned
using the maximum a posteriori (MAP) rule.
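As an illustrative sketch (not the paper's exact implementation), ML fitting of a one-dimensional two-component Gaussian mixture via EM, followed by MAP assignment, can be written as follows; the toy data values and initial parameters are hypothetical:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_1d(xs, pis, mus, sigmas, n_iter=50):
    """EM for a 1-D Gaussian mixture; returns parameters and MAP labels."""
    K, N = len(pis), len(xs)
    post = []
    for _ in range(n_iter):
        # E-step: posterior P(eik = 1 | xi) by Bayes' rule
        post = []
        for x in xs:
            w = [pis[k] * gauss(x, mus[k], sigmas[k]) for k in range(K)]
            s = sum(w)
            post.append([wk / s for wk in w])
        # M-step: maximum-likelihood updates of pik, muk, sigmak
        for k in range(K):
            r = sum(p[k] for p in post)
            pis[k] = r / N
            mus[k] = sum(p[k] * x for p, x in zip(post, xs)) / r
            var = sum(p[k] * (x - mus[k]) ** 2 for p, x in zip(post, xs)) / r
            sigmas[k] = max(math.sqrt(var), 1e-6)
    # MAP rule: each pixel goes to the class with the largest posterior
    labels = [max(range(K), key=lambda k: p[k]) for p in post]
    return pis, mus, sigmas, labels

# Toy "image": two intensity populations around 10 and 50
xs = [10.0, 11.0, 9.0, 50.0, 49.0, 51.0]
pis, mus, sigmas, labels = em_gmm_1d(xs, [0.5, 0.5], [0.0, 40.0], [10.0, 10.0])
```

After convergence the two components settle near the two intensity populations, and the MAP labels split the data accordingly.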
The main advantage of the MM is that it has a simple mathematical form and is easy to
implement. Thus, it has been used successfully in a number of applications [14], [15]. A
drawback of the MM is that it does not use the spatial information in the data [16]. To
overcome this drawback, the spatial information needs to be incorporated into the model, and
several approaches have been proposed.
Markov random field (MRF) [17], [18], [19] theory provides a convenient and consistent way to
account for spatial dependencies between image pixels. The spatially variant finite mixture model
(SVFMM) [16], [20], [21] assumes that the prior distributions form a Markov field. The prior
distribution of the SVFMM depends on the neighboring pixels and their corresponding
parameters. Thus, these models work well for medical image segmentation [22]. However, the
drawback is that their parameter estimation is computationally expensive. Various
approximations have been introduced to tackle this problem. The gradient projection algorithm
[16] was proposed to implement the M-step of the EM algorithm. A closed-form update equation
followed by an efficient projection method [20] is used to estimate the parameters.
In [23], a new family of prior distributions was proposed for the SVFMM. The proposed prior
probability is based on a Gauss-Markov random field, which controls the degree of smoothness
for each cluster. The parameters can be estimated in closed form via MAP estimation using the
EM algorithm. In [24], the Dirichlet Compound Multinomial-based Spatially Variant Finite
Mixture Model (DCM-SVFMM) was proposed. This model exploits the Dirichlet compound
multinomial probability to model the probabilities of class labels and a Gauss-Markov random
field to model the spatial prior.
Another approach to taking the commonality of location into account is the extension of the
mixture model [25]. This method is quite similar to the mixture model and easy to implement. It introduces a
novel prior distribution which acts like a mean filter to incorporate the spatial relationships
between neighboring pixels. In [21], a spatially constrained generative model was proposed for
segmentation. It assumes the prior distributions share similar parameters for neighboring pixels
and introduces a pseudo-likelihood quantity to measure the similarity between neighboring
pixels' priors.
In this paper, we introduce a probabilistic field theory-based model for image segmentation. This
model assumes that the whole given image and some unknown parameters construct a field and
the labels of image pixels are functions of the field. Therefore, the mixture model-based
segmentation can be considered as a special case of the proposed framework. Furthermore, in
order to incorporate the spatial relationships between the neighboring pixels, we propose a novel
spatially variant mixture model to estimate the parameters.
We also propose a novel approach to properly incorporate the spatial relationships between the
neighboring pixels into the priors. The proposed probabilistic field theory-based model accounts
for the spatially local interactions. Numerical experiments are presented to assess the performance
of the proposed model on both simulated data and real natural images. We compare the results
of our proposed method to other methods both visually and quantitatively.
The remainder of this paper is organized as follows. In Section 2, we present a brief introduction
to the image segmentation problem and some mixture model-based methods. We describe the
details of the proposed model and the algorithm in Section 3 and Section 4, respectively. The
experimental results are shown in Section 5.
2. A REVIEW OF MIXTURE MODEL-BASED METHODS FOR IMAGE
SEGMENTATION
In this section, we discuss four commonly used mixture model-based methods for image
segmentation. The first is the standard mixture model, in which each pixel is considered
independent of its neighbors. In order to incorporate spatial information into the mixture
model, the spatially variant finite mixture model (SVFMM) was proposed, which assumes that
the prior distributions that generate the pixel labels form a Markov random field. The third is
an extension of the standard mixture model. The last is an extension of the SVFMM.
First, we define the image multi-class segmentation problem. Given an image X with N
pixels, we suppose it can be partitioned into K classes Uk, k = 1, 2, ..., K, with the following
properties: the classes are disjoint (Uk ∩ Ul = ∅ for k ≠ l) and together they cover the whole
image (U1 ∪ U2 ∪ ... ∪ UK = X).
Then we provide a brief description of the multi-class image labeling problem. Consider a set of
random variables E = { e1, e2, ..., eN }. The random field is defined on the image X, where each
pixel i is associated with a random variable ei. Each variable ei = (ei1, ei2, ..., eiK) represents a
region assignment for pixel i, where eik is either 0 or 1. After computing the posterior
probabilities P(eik = 1 | xi) for all classes, pixel i is assigned to the class with the largest
posterior probability.
A. Mixture Model
The mixture model [11] assumes a common prior distribution π, a discrete distribution
with K states whose parameters πk, k = 1, 2, ..., K are unknown, so that P(eik = 1) = πk for
every pixel i. The prior πk has no dependence on the pixel index i, and the prior distribution
satisfies the constraints πk ≥ 0 and π1 + π2 + ... + πK = 1.
The mixture model assumes that the density function at a pixel observation xi is given by
P(xi | π, θ) = Σk πk P(xi | eik = 1, θ), where P(xi | eik = 1, θ) = P(xi | θk) is the density
conditional on the class label k, and θk is the parameter of the kth component distribution.
Given the mixture model, the next step is to estimate its parameters using the maximum
likelihood approach. The log-likelihood function of all observations X = {x1, x2, ..., xN}
for the N pixels is given by L(θ) = Σi log Σk πk P(xi | θk). The log-likelihood function is
considered as a function of the parameters θ, where the data X are fixed. The parameters are
estimated by the EM algorithm [12], [13].
B. Spatially Variant Finite Mixture Model
In the mixture model, the pixels are considered independent. To account for the spatial
dependence between image pixels, the spatially variant finite mixture model (SVFMM) [16], [20]
was proposed. The density function at an observation xi is given by P(xi | π, θ) = Σk πik P(xi | θk),
where the parameter πik is the prior probability of the pixel xi belonging to the kth class, which
satisfies the constraints 0 ≤ πik ≤ 1 and Σk πik = 1.
The SVFMM assumes that the prior distributions that generate the pixel labels form a Markov
random field. The random field of the prior distribution is defined as [16]
P(π) = (1/Z) exp(−β U(π)),
where Z is a normalizing constant called the partition function and β is a regularization constant.
The function U(π) denotes the clique potential function on π, which captures the spatial
interaction information and can be computed as U(π) = Σi Σj∈N(i) || πi − πj ||²,
where N(i) is the neighborhood of pixel i.
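A minimal sketch of this clique potential, assuming the squared-difference form over neighboring prior vectors (the function and variable names below are hypothetical):

```python
def clique_potential(priors, neighbors):
    """U(pi): sum over pixels i and neighbors j in N(i) of the squared
    difference between the label-prior vectors pi_i and pi_j."""
    U = 0.0
    for i, pi_i in enumerate(priors):
        for j in neighbors[i]:
            U += sum((a - b) ** 2 for a, b in zip(pi_i, priors[j]))
    return U

# Two mutually neighboring pixels with opposite priors
U = clique_potential([[1.0, 0.0], [0.0, 1.0]], {0: [1], 1: [0]})
print(U)  # 4.0
```

Identical priors in a neighborhood yield U = 0, so minimizing the potential drives neighboring priors to agree.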
The log-likelihood (MAP) function becomes L = Σi log Σk πik P(xi | θk) + log P(π).
The above log-likelihood function is more complex than that of the standard mixture model,
since there are N different πi distributions, one for each pixel i. The M-step of the EM algorithm
cannot be applied directly to the prior distributions πi. In [16], [20] and [26], a large amount of
computational effort goes into the estimation of the πi priors in the M-step.
In the standard mixture model, only the intensity value of a pixel determines the segmentation
result, while the result of the SVFMM is based on both the intensity value and the spatial
information.
C. A Class-Adaptive Spatially Variant Finite Mixture Model
In this section, an extension of the spatially variant finite mixture model [23] is presented. In this
model, the prior probability is based on a Gauss-Markov random field. The novel prior
probability allows its parameters to be estimated in closed form using the EM
methodology [16], [20] and [26]. The Gauss-Markov random field prior probability for π in (11)
is given by
where N(i) is the neighborhood of the pixel i, and the class-specific smoothness parameters
capture the spatial smoothness of class k. In this class-adaptive spatially variant mixture model
(CA-SVFMM), the prior can capture the smoothness of each class to a different degree and
adapts better to the data.
The predictor of the prior probability πik is defined as the mean over its spatial neighbors,
π̄ik = (1/|N(i)|) Σj∈N(i) πjk, where |N(i)| is the number of pixels in the neighborhood N(i).
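In code, the neighbor-mean predictor reads as follows (a sketch with hypothetical data):

```python
def prior_predictor(priors, neighbors, i, k):
    """Predict pi_ik as the mean of pi_jk over the spatial neighbors N(i)."""
    nb = neighbors[i]
    return sum(priors[j][k] for j in nb) / len(nb)

priors = [[0.2, 0.8], [0.4, 0.6], [0.6, 0.4]]
pred = prior_predictor(priors, {1: [0, 2]}, i=1, k=0)  # mean of 0.2 and 0.6
```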
The prior statistical assumption is πik = π̄ik + εik, where the prediction error εik ~ N(0, β²)
is a Gaussian random variable with zero mean and variance β².
Moreover, the parameter β of the prior probability can also capture the smoothness in different
directions. In [23], the authors introduce parameters which express not only the class
variance for class k but also the variance within class k at a certain spatial direction d. In this
case, the prior is given by a direction-dependent Gauss-Markov random field.
Using the Gauss-Markov random field-based prior probability, the EM algorithm [16], [20] and
[26] is utilized for estimating the parameters. The details of the algorithm are presented in [23].
3. PROPOSED METHOD
In this section, we introduce a probabilistic field theory-based model, which assumes that the
image pixels and some unknown parameters together construct a field. The mixture model-based
methods can be considered as special approaches for estimating the parameters. Furthermore, we
propose a new spatially constrained mixture model for parameter estimation. In order to account
for the interaction between the posterior probabilities, we also propose a spatially constrained
probabilistic field theory-based model for image segmentation.
A. Probabilistic Field Theory-based Model
Field theory [27], [28] is a classic framework for the analysis of social behavior. Employing the
conventions of mathematics, Lewin's equation, B = f(P, E), represents the idea that human
behavior is related both to one's personal characteristics and to the environment, where B stands
for the individual's publicly observable behavior, P stands for all the causal factors of the
individual person, and E stands for all the causal factors of the environment. This equation is
graphically shown in Fig.1(a).
We propose an extension of Lewin's equation using probability theory: P(B) = f(P, E) and B = g(P(B)),
where P(B) is the probability of the behavior, and the behavior is a function of the probability
P(B), as shown in Fig.1(b).
The above image segmentation problem in (1) can be considered as a process of labeling. For the
image labeling problem, the random variable ei = (ei1, ei2, ..., eiK) can be considered as the
behavior of pixel i. We define a set of parameters θ as the environment factors. So the
probabilistic field theory for image segmentation is P(ei) = f(xi, θ), ei = g(P(ei)),
where P(ei) = (P(ei1 = 1 | xi), ..., P(eiK = 1 | xi)) is a vector which represents the probabilities of
the labels for pixel i, P(eik = 1 | xi) is the probability of pixel i belonging to class k, and xi is
the observed value of pixel i. This is graphically shown in Fig.1(c).
In our method, we define function f using the Bayesian rule:
And for each pixel i, we define function g using the maximum a posterior rule(MAP) to assign it:
The probabilistic field theory-based model is complete only when its parameters are determined.
Thus, before we use our model to segment an image, we need to estimate the parameters. The
mixture model-based methods are commonly used approaches for parameter estimation. In this
paper, we present a new spatially constrained mixture model to estimate the parameters.
B. A New Spatially Constrained Mixture Model for Parameter Estimation
We propose a new family of smoothness priors to take the spatial statistical information into
account. The weight function is defined as in the extension of the mixture model [25].
We define a new prior distribution as a closeness-weighted average over the neighborhood,
π̃ik = (1/Zi) Σj∈N(i) h(i, j) πjk, where Zi = Σj∈N(i) h(i, j) is the normalizing factor and h(i, j)
measures the geometric closeness (in Euclidean distance) between pixels i and j. In this paper,
the geometric closeness h is a Gaussian function of the magnitude of the relative position vector
of pixel j from pixel i, || ui − uj ||.
Fig. 1. (a) Field Theory for analysis of social behavior. (b) Probabilistic Field Theory for analysis of social
behavior. (c) Probabilistic Field Theory for image segmentation.
The geometric closeness is given as a function h which decreases as the distance || ui − uj ||
increases: h(i, j) = exp(−|| ui − uj ||² / (2σg²)),
where σg is a parameter which defines the desired structural locality between neighboring pixels,
and ui is the location of pixel i.
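Assuming the squared-exponential form h(i, j) = exp(−|| ui − uj ||² / (2σg²)) (an assumption consistent with the 3×3-window normalizer quoted in Section 4), the closeness function can be sketched as:

```python
import math

def closeness(u_i, u_j, sigma_g2=12.0):
    """Geometric closeness h(i, j): Gaussian in the Euclidean distance
    between pixel locations u_i and u_j; decreases as distance grows."""
    d2 = (u_i[0] - u_j[0]) ** 2 + (u_i[1] - u_j[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma_g2))

h0 = closeness((0, 0), (0, 0))  # 1.0 at zero distance
h1 = closeness((0, 0), (0, 1))
h2 = closeness((0, 0), (1, 1))
# h0 > h1 > h2: closeness decays monotonically with distance
```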
The density function of this mixture model at an observation xi is given by
P(xi | π, θ) = Σk π̃ik P(xi | θk), where π̃ik is the new spatially smoothed prior.
Fig. 2. A Spatially Constrained Probabilistic Field Theory for image segmentation.
Using the above proposed spatially constrained mixture model, the log-likelihood function can be
computed as L = Σi log Σk π̃ik P(xi | θk), where π̃ik denotes the spatially smoothed prior.
The parameters are estimated by maximizing the above likelihood L. The algorithm will be
presented in the next section.
C. A Spatially Constrained Probabilistic Field Theory-based Model for Image Segmentation
The above proposed mixture model utilizes a novel approach to incorporate the spatial
relationships between neighboring pixels into the prior distribution. However, in the probabilistic
field theory-based model for image segmentation, we assume that there is no interaction among
the posterior probabilities of different pixels. In fact, the posterior probability of one pixel can be
influenced by the posterior probabilities of its neighboring pixels. Thus, we propose a spatially
constrained probabilistic field theory-based model for image segmentation. In the proposed
model, to take the interactions of the posterior distributions into account, we add a function F
between the function f and the function g to incorporate the dependence among the observed data
(image pixels).
Motivated by the above, we introduce a new function F, which models the interaction over the
probability distributions of the assignments in a neighborhood, where N(i) denotes the set of
pixels adjacent to pixel i with respect to a neighborhood system. The function F is defined as a
closeness-weighted average of the neighboring posteriors,
F(P(ei)) = (1/Zi) Σj∈N(i) h(i, j) P(ej),
where Zi is the normalizing factor and h(i, j) is the function measuring the geometric closeness,
both defined in (21).
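A sketch of this smoothing step F as a closeness-weighted average of the neighbors' posterior vectors (the toy example below uses uniform weights; names are hypothetical):

```python
def smooth_posteriors(post, neighbors, h):
    """Function F: replace each pixel's posterior vector with a
    closeness-weighted average over its neighborhood N(i)."""
    out = []
    for i, p_i in enumerate(post):
        nb = neighbors[i]
        Z = sum(h(i, j) for j in nb)  # normalizing factor Z_i
        out.append([sum(h(i, j) * post[j][k] for j in nb) / Z
                    for k in range(len(p_i))])
    return out

post = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]]      # middle pixel disagrees
nbrs = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
smoothed = smooth_posteriors(post, nbrs, lambda i, j: 1.0)
# smoothed[1] == [2/3, 1/3]: the outlier posterior is pulled toward its neighbors
```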
The proposed model for image segmentation is presented as follows:
The model is graphically shown in Fig.2. Compared to Fig.1(c), it is easy to see that the
difference between the probabilistic field theory-based model and the proposed model is the
function F. The new posterior probability of the proposed model acts like a mean filter. For that
reason, the image segmentation result obtained by employing the proposed model is robust with
respect to noise.
Once the parameters have been estimated using the proposed spatially constrained mixture
model, the above spatially constrained probabilistic field theory-based model segments an image
in three steps:
Step 1: compute the posterior probabilities with the function f.
Step 2: smooth the posterior probabilities over each neighborhood with the function F.
Step 3: assign each pixel to a class with the MAP function g.
4. ALGORITHM
In this section, we first adjust the parameters to maximize the likelihood function L
in (23) using a modified EM algorithm. Then we compute the functions f, F and g using the
estimated parameters.
A. Parameter Estimation Algorithm
In general, we assume that the component distributions of the mixture model are Gaussian,
P(xi | θk) = N(xi; μk, Σk) with θk = (μk, Σk). In this paper, we use a 3×3 window. Thus, the free
parameter σg² of the geometric function h is set to 12, and the normalizing factor Zi is constant
and set to 8.52.
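The constant Zi can be checked numerically: assuming h(i, j) = exp(−|| ui − uj ||² / (2σg²)) with σg² = 12 and summing over the 3×3 window centered on a pixel (center included), the sum comes out to 8.52:

```python
import math

# Sum the Gaussian closeness over the 3x3 window (offsets -1..1 in x and y,
# center included), assuming h = exp(-d^2 / (2 * sigma_g^2)) with sigma_g^2 = 12
sigma_g2 = 12.0
Z = sum(math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_g2))
        for dx in (-1, 0, 1) for dy in (-1, 0, 1))
print(round(Z, 2))  # 8.52
```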
We use the K-means algorithm to initialize the mean and covariance of each component
k = 1, 2, ..., K. For the mth iteration, the proposed algorithm can be summarized as follows.
Step 5: Check for convergence or the maximum number of iterations. If the criterion is not
satisfied, set m ← m + 1 and return to step 1.
When the criterion is satisfied, the current values are taken as the estimated parameters.
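Since the detailed update equations did not survive extraction, the following is only a hedged sketch of what one iteration of such a spatially constrained EM typically looks like: an E-step with per-pixel priors, standard Gaussian M-step updates, and a closeness-weighted re-estimation of the priors. All names and the exact prior update are assumptions, not the paper's verbatim equations.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_iteration(xs, priors, mus, sigmas, neighbors, h):
    """One sketched iteration: E-step posteriors, M-step Gaussian updates,
    then a spatially smoothed update of the per-pixel priors."""
    K = len(mus)
    # E-step: posterior P(eik = 1 | xi) with the current per-pixel priors
    post = []
    for i, x in enumerate(xs):
        w = [priors[i][k] * gauss(x, mus[k], sigmas[k]) for k in range(K)]
        s = sum(w)
        post.append([wk / s for wk in w])
    # M-step: maximum-likelihood updates of the Gaussian parameters
    for k in range(K):
        r = sum(p[k] for p in post)
        mus[k] = sum(p[k] * x for p, x in zip(post, xs)) / r
        var = sum(p[k] * (x - mus[k]) ** 2 for p, x in zip(post, xs)) / r
        sigmas[k] = max(math.sqrt(var), 1e-6)
    # Prior update: closeness-weighted average of posteriors over N(i)
    for i in range(len(xs)):
        Z = sum(h(i, j) for j in neighbors[i])
        priors[i] = [sum(h(i, j) * post[j][k] for j in neighbors[i]) / Z
                     for k in range(K)]
    return priors, mus, sigmas

# Toy run on a 1-D "image" with two intensity populations
xs = [10.0, 11.0, 50.0, 51.0]
nbrs = {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}
priors = [[0.5, 0.5] for _ in range(4)]
priors, mus, sigmas = em_iteration(xs, priors, [0.0, 40.0], [10.0, 10.0],
                                   nbrs, lambda i, j: 1.0)
```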
B. Algorithm for Proposed Image Segmentation Model
When the parameters have been estimated, the next step is to determine to which class each pixel
i should be assigned. Based on the proposed model, the algorithm can be summarized as follows.
Step 1: Calculate the function f.
Step 2: Calculate the function F.
Step 3: Calculate the function g.
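Putting the three steps together, a minimal sketch of the labeling stage (f, then F, then g) with hypothetical toy data, showing how a single noisy pixel is corrected by its neighborhood:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def segment(xs, priors, mus, sigmas, neighbors, h):
    """Three-step labeling: f (Bayes posteriors), F (neighborhood
    smoothing), g (MAP assignment)."""
    K = len(mus)
    # Step 1 -- f: posterior probabilities by Bayes' rule
    post = []
    for i, x in enumerate(xs):
        w = [priors[i][k] * gauss(x, mus[k], sigmas[k]) for k in range(K)]
        s = sum(w)
        post.append([wk / s for wk in w])
    # Step 2 -- F: closeness-weighted smoothing of the posteriors
    smooth = []
    for i in range(len(xs)):
        Z = sum(h(i, j) for j in neighbors[i])
        smooth.append([sum(h(i, j) * post[j][k] for j in neighbors[i]) / Z
                       for k in range(K)])
    # Step 3 -- g: MAP rule, pick the class with the largest posterior
    return [max(range(K), key=lambda k: p[k]) for p in smooth]

# A 1-D "image" whose middle pixel is corrupted by noise
xs = [10.0, 11.0, 90.0, 10.0, 11.0]
nbrs = {i: [j for j in (i - 1, i, i + 1) if 0 <= j < 5] for i in range(5)}
labels = segment(xs, [[0.5, 0.5]] * 5, [10.0, 90.0], [5.0, 5.0],
                 nbrs, lambda i, j: 1.0)
print(labels)  # [0, 0, 0, 0, 0] -- the noisy pixel is relabeled by its neighbors
```

Without the smoothing step F, the middle pixel would have been assigned to class 1; this is the mean-filter effect described above.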
5. EXPERIMENTS
In this section, we provide experimental results on synthetic and real-world images to evaluate
our proposed algorithm.
Fig. 3. Experiment on a synthetic image. (a) Original image. (b) Original image corrupted with Gaussian
noise (0 mean, 0.02 variance). (c) Standard Mixture Model (MCR=19.24%). (d) SVFMM (MCR=10.18%).
(e) CA-SVFMM (MCR=9.58%). (f) Proposed Method (MCR=6.86%).
A. Parameter estimation
The algorithm has three kinds of parameters: the number of components K, the parameters of the
prior distribution π, and the parameters of all the components θ. In this work we assume that the
number of components K is given for a particular image. The choice of the parameters
directly affects the quality of image segmentation. In this section, we estimate the parameters
of the standard Gaussian mixture model (termed GMM), the SVFMM, the extension of the
mixture model (termed EMM), the class-adaptive SVFMM (termed CA-SVFMM), and the
proposed model, and we compare the efficiency of the proposed model with the others.
We use a synthetic image (256 × 256 image resolution) similar to the one used in [17], [20] (see
Fig.3(a)). The simulated image has three classes (K=3) sampled from an MRF model using the
Gibbs sampler [18]. The gray levels for the three classes are 55, 115 and 225, respectively.
Fig.3(b) shows the same image with added Gaussian noise (mean=0, variance=0.02).
Fig. 4. Experiment on a five-class synthetic image. (a) Original image. (b) Original image corrupted with
Gaussian noise (0 mean, 0.01 variance). (c) Standard Mixture Model (MCR=48.10%). (d) SVFMM
(MCR=34.31%). (e) CA-SVFMM (MCR=31.42%). (f) Proposed Method (MCR=26.23%).
We use the EM-based algorithm to estimate the parameters of the standard GMM, EMM,
SVFMM, CA-SVFMM, and our proposed model. Table I shows the estimated parameters for
Fig.3(b). From this table, we can see that the estimated parameters of the EMM, SVFMM,
CA-SVFMM and our proposed model are equivalent.
The running times for estimating the parameters of the five models are reported in Table II. The
computer used is a PC with an Intel Core i3 CPU at 3.07 GHz and 8.00 GB RAM. From this
table, we can see that the standard GMM runs fastest, but its parameter estimates are
considerably worse than the others. The running time of the CA-SVFMM is the slowest.
B. Synthetic Images
We illustrate the performance of our proposed model against the reference methods on synthetic
image segmentation. First, we show a comparison of the standard GMM, EMM, SVFMM,
CA-SVFMM and our proposed model on the above three-class image segmentation. We use the
misclassification ratio (MCR) [17] to measure the segmentation accuracy, which is defined as
the ratio of the number of misclassified pixels to the total number of pixels.
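As a sketch (assuming the estimated labels are already aligned with the ground-truth labeling, i.e., no label-permutation matching is needed), the MCR can be computed as:

```python
def mcr(labels, truth):
    """Misclassification ratio: fraction of pixels whose estimated
    label differs from the ground-truth label."""
    wrong = sum(1 for a, b in zip(labels, truth) if a != b)
    return wrong / len(truth)

print(mcr([0, 1, 1, 2], [0, 1, 2, 2]))  # 0.25
```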
Fig.3(a) shows the original synthetic image and Fig.3(b) shows the original image with added
Gaussian noise (mean=0, variance=0.02). Fig.3(c)-(g) show the segmentation results obtained by
the standard GMM, EMM, SVFMM, CA-SVFMM and our proposed model. Among these
five algorithms, it is easy to see that our algorithm obtains the best results. The results obtained
with varying levels of noise are presented in Table III. As can be seen, the proposed method has a
lower MCR compared with the other methods.
In the second experiment, we illustrate our algorithm on a five-class synthetic image. We use a
simulated image (256 × 256 image resolution) similar to the above three-class image (see
Fig.4(a)). The gray levels for the five classes are 24, 83, 130, 180 and 230, respectively. In
Fig.4(b) we show the image after adding Gaussian noise with 0 mean and 0.01 variance.
Fig.4(c)-(g) show the segmentation results obtained by the standard GMM, EMM, SVFMM,
CA-SVFMM and our proposed model, respectively. Among these methods, the proposed method
classifies the image with the lowest MCR. Moreover, as compared with the other methods, the
result obtained by employing our method, as shown in Fig.4(g), demonstrates a higher degree of
robustness with respect to the given level of noise.
From the above examples we can conclude that the segmentation results of the proposed model
are quantitatively significant and exhibit a higher degree of robustness with respect to noise.
C. Real World Images
We have also evaluated and compared the proposed algorithms on the segmentation of RGB
natural images. The real-world images are obtained from the Berkeley image segmentation
database [29]. This database consists of a set of natural images and their ground-truth
segmentation results provided by different individuals. An error measure, the Global Consistency
Error (GCE) [29], has been defined to quantify the consistency between different segmentations.
The GCE produces a real-valued output in the range [0, 1], where zero signifies no error. In
this paper, we use the GCE to evaluate the performance of our proposed algorithm. Furthermore,
we use the probabilistic Rand (PR) index [30] for the comparisons of segmentations on the
Berkeley database. It takes values in the range [0, 1], with values closer to 1 indicating a better
result.
In the first experiment, we tried to segment a color image (482 × 321 image resolution), shown in
Fig.5(a), into three classes: the blue sky, the white church wall and the red cupola. The images in
Fig.5(b), (c), (d), (e) and (f) show the results obtained by using the standard mixture model
(MM), the extension of the MM, the SVFMM, the CA-SVFMM and our proposed model,
respectively. As can be seen, the proposed method outperforms the other methods with a higher
PR, and the segmentation result of our proposed method is visually good. We can observe that
the white church wall is slightly better preserved by our method. Furthermore, the bottom-right
red window and the two crosses are more accurately extracted by our method.
In order to further test the accuracy and determine the efficiency of the proposed method, we do
experiment on another real world image. Fig.6(a) shows a color image (321 481 image
resolution). We tried to segment the image into three classes. Images in Fig.6(b) (c), (d), (e) and
(f) show the results obtained by using the standard MM, the extension of MM, SVFMM, CA-
SVFMM and our proposed model, respectively. Compared with these methods, the proposed
method outperforms other methods with a higher PR and a lower GCE. The image shown in
16. Signal Image Processing : An International Journal (SIPIJ) Vol.6, No.4, August 2015
86
Fig.6(g) is obtained by corrupting the original image in Fig.6(a) with Gaussian noise (0 mean,
0.01 variance). Fig.6(h) (i), (j), (k) and (l) present the segmentation results obtained by employing
the standard MM, the extension of MM, SVFMM, CA-SVFMM and our proposed model,
respectively. From visual inspection of the results, our proposed method demonstrates better
performance than the other methods, and it also achieves a higher PR and a lower GCE.
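The corruption step behind Fig.6(g), additive Gaussian noise with zero mean and 0.01 variance, can be sketched as below for intensities scaled to [0, 1]. This is a minimal sketch; the clipping of out-of-range values back into [0, 1] is our assumption, not stated in the paper:

```python
import random

def add_gaussian_noise(pixels, mean=0.0, variance=0.01, seed=None):
    """Corrupt a flat list of [0, 1] intensities with additive Gaussian noise
    (zero mean, 0.01 variance by default, as in the Fig.6 experiment),
    clipping the result back into the valid range."""
    rng = random.Random(seed)
    sigma = variance ** 0.5  # random.gauss takes a standard deviation
    return [min(1.0, max(0.0, p + rng.gauss(mean, sigma))) for p in pixels]
```

In practice the same corruption is applied independently to each color channel before the noisy image is segmented.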
In the final experiment, a set of real world images was used to evaluate the performance of the
proposed method against the standard MM, extension of MM, SVFMM and CA-SVFMM
methods. Table IV presents the cumulative results obtained by all methods for the given set of
images. As can easily be seen, the proposed method outperforms the other methods with a higher
PR.
Fig. 5. Experiment on color image. (a) Original image. (b) Standard Mixture Model (PR=0.830).
(c) SVFMM (PR=0.836). (d) CA-SVFMM (PR=0.838). (e) Proposed Method (PR=0.841).
Fig. 6. Experiment on color image. (a) Original image and its three-class segmentation by (b) MM
(PR=0.693, GCE=0.286), (c) SVFMM (PR=0.713, GCE=0.267), (d) CA-SVFMM (PR=0.716,
GCE=0.255), (e) Proposed Method (PR=0.719, GCE=0.245). (f) Original image corrupted with
Gaussian noise (zero mean, 0.01 variance) and its three-class segmentation by (g) MM (PR=0.697,
GCE=0.342), (h) SVFMM (PR=0.709, GCE=0.302), (i) CA-SVFMM (PR=0.711, GCE=0.287),
(j) Proposed Method (PR=0.716, GCE=0.266).
6. CONCLUSION
In this paper, we have presented a spatially constrained probabilistic field theory-based model for
image segmentation. The proposed model postulates that all the pixels and some unknown
parameters form a field, and that the label of a particular pixel is a compound function of its own
features, its neighbors' features and some parameters. To estimate the parameters, we have
proposed a novel approach that incorporates the spatial relationships between neighboring pixels
into the mixture model. The proposed method has been tested on many synthetic and real world
images, and the experimental results demonstrate its excellent segmentation performance.
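For reference, the EM updates for the standard (spatially unaware) mixture-model baseline compared against above can be sketched as below for a 1-D Gaussian mixture; the proposed model extends this scheme by coupling neighboring pixels through spatially variant priors. This is an illustrative sketch of the generic EM recursion [12], not the paper's algorithm, and all names are our own:

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    """EM for a 1-D Gaussian mixture: alternate posterior responsibilities
    (E-step) with closed-form parameter updates (M-step)."""
    lo, hi = min(data), max(data)
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread initial means
    var = [1.0] * k
    pi = [1.0 / k] * k
    n = len(data)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each observation.
        resp = []
        for x in data:
            dens = [pi[j] / math.sqrt(2 * math.pi * var[j])
                    * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            s = sum(dens) or 1e-300
            resp.append([d / s for d in dens])
        # M-step: re-estimate mixing weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            pi[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return pi, mu, var
```

In the spatially constrained setting, the constant weights `pi` are replaced by per-pixel label probabilities that are smoothed over each pixel's neighborhood, which is what suppresses the noise sensitivity seen in Fig.6.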
REFERENCES
[1] J. Freixenet, X. Munoz, D. Raba, J. Marti, and X. Cufi, “Yet another survey on image segmentation:
region and boundary information integration,” 7th European Conference on Computer Vision, pp.
408–422, 2002.
[2] N. Pal and S. Pal, “A review of image segmentation techniques,” Pattern Recognition, vol. 26, pp.
1277–1294, 1993.
[3] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” Int. J. Comput. Vis., vol.
1, pp. 321–331, 1988.
[4] C. Xu and J. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Trans. Image Process., vol. 7,
no. 3, pp. 359–369,1998.
[5] B. Li and S. Acton, “Active contour external force using vector field convolution for image
segmentation,” IEEE Trans Image Process., vol. 16, pp. 2096–2106, 2007.
[6] G. Sundaramoorthi, A. Yezzi, and A. Mennucci, “Sobolev active contours,” International Journal of
Computer Vision, vol. 73, pp. 109–120, 2005.
[7] J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 22, no. 8, pp. 888–905, 2000.
[8] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603–619, 2002.
[9] R. Xu and D. Wunsch II, “Survey of clustering algorithms,” IEEE Trans. Neural Netw., vol. 16, no. 3,
pp. 645–678, 2005.
[10] G. McLachlan, “Finite mixture models,” Hoboken, NJ: Wiley, 2000.
[11] F. Picard, “An introduction to mixture model,” Statistics for Systems biology Research Report No.7,
2007.
[12] A. Dempster, N. Laird, and D. Rubin, “Maximum likelihood from incomplete data via the EM
algorithm,” Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1–38, 1977.
[13] R. Neal and G. Hinton, “A view of the EM algorithm that justifies incremental, sparse, and other
variants,” in Learning in Graphical Models, M. Jordan, Ed. Kluwer Academic Publishers, pp. 355–368,
1998.
[14] K. Blekas, D. I. Fotiadis, and A. Likas, “Greedy mixture learning for multiple motif discovery in
biological sequences,” Bioinformatics, vol. 19, no. 5, pp. 607–617, 2003.
[15] H. Greenspan, G. Dvir, and Y. Rubner, “Context-dependent segmentation and matching in image
databases,” Comput. Vis. Image Understand., vol. 93, no. 1, pp. 86–109, 2004.
[16] S. Sanjay-Gopal and T. J. Hebert, “Bayesian pixel classification using spatially variant finite mixtures
and the generalized EM algorithm,” IEEE Trans. Image Process., vol. 7, no. 7, pp. 1014–1028, 1998.
[17] Y. Zhang, M. Brady, and S. Smith, “Segmentation of brain MR images through a hidden Markov
random field model and the expectation-maximization algorithm,” IEEE Trans. Med. Imag., vol. 20, no.
1, pp. 45–57, 2001.
[18] S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of
images,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 6, pp. 721–741, 1984.
[19] S. Z. Li, “Markov random field modeling in computer vision,” London, UK: Springer-Verlag, 2001.
[20] K. Blekas, A. Likas, N. P. Galatsanos, and I. E. Lagaris, “A spatially constrained mixture model for
image segmentation,” IEEE Trans. Neural Netw., vol. 16, no. 2, pp. 494–498, 2005.
[21] A. Diplaros, N. Vlassis, and T. Gevers, “A spatially constrained generative model and an EM
algorithm for image segmentation,” IEEE Trans. Neural Netw., vol. 18, no. 3, pp. 798–808, 2007.
[22] M. Woolrich, T. Behrens, C. Beckmann, and S. Smith, “Mixture models with adaptive spatial
regularization for segmentation with an application to fMRI data,” IEEE Trans. Med. Imag., vol. 24,
no. 1, pp. 1–11, 2005.
[23] C. Nikou, N. P. Galatsanos, and A. Likas, “A class-adaptive spatially variant mixture model for
image segmentation,” IEEE Trans. Image Process., vol. 16, no. 4, pp. 1121–1130, 2007.
[24] C. Nikou, A. Likas, and N. P. Galatsanos, “A Bayesian framework for image segmentation with
spatially varying mixtures,” IEEE Trans. Image Process., vol. 19, no. 9, pp. 2278–2289, 2010.
[25] T. M. Nguyen, Q. Wu, and S. Ahuja, “An extension of the standard mixture model for image
segmentation,” IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1326–1338, 2010.
[26] J. L. Marroquin, E. A. Santana, and S. Botello, “Hidden Markov measure field models for image
segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 11, pp. 1380–1387, 2003.
[27] K. Lewin, “Resolving social conflicts; selected papers on group dynamics,” Gertrude W. Lewin (ed.)
New York: Harper and Row, 1948.
[28] K. Lewin, “Field theory in social science; selected theoretical papers,” D. Cartwright (ed.). New York:
Harper and Row, 1951.
[29] D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its
application to evaluating segmentation algorithms and measuring ecological statistics,” in Proc. Int.
Conf. Comput. Vision, pp. 416–423, Jul. 2001.
[30] R. Unnikrishnan, C. Pantofaru, and M. Hebert, “A measure for objective evaluation of image
segmentation algorithms,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 3, pp.
34–41, Jun. 2005.