1) The document discusses using a cluster of computers to analyze and classify massive biomedical image data more efficiently.
2) It describes parallelizing an MRF-Gibbs classification algorithm across the cluster to segment and classify images from the Visible Human Project dataset, which contains high-resolution 3D imagery totalling over 4200 MB.
3) The cluster is made up of 8 PC workstations connected by an ATM switch and Ethernet, and supports two programming interfaces (MPI and Paradise) to implement parallel algorithms for improved processing throughput of the large image datasets.
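The summary above describes MPI-based data parallelism over image slices. The following is a minimal sketch (not the paper's code) of how slices might be scattered across the 8 workers with mpi4py; a toy thresholding step stands in for the MRF-Gibbs classifier.

```python
# Minimal mpi4py sketch, assuming one classifier instance per rank; a toy
# thresholding step stands in for the MRF-Gibbs classifier.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

slices = None
if rank == 0:
    volume = np.random.rand(size * 64, 512, 512)        # stand-in for VHP slices
    slices = np.array_split(volume, size, axis=0)       # one chunk per worker

local = comm.scatter(slices, root=0)                    # distribute chunks
local_labels = (local > local.mean()).astype(np.uint8)  # classify locally (toy)
labels = comm.gather(local_labels, root=0)              # collect results

if rank == 0:
    print("classified", sum(part.shape[0] for part in labels), "slices")
```

Run with, e.g., mpiexec -n 8 python classify.py to mirror the 8-workstation setup.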
CNN FEATURES ARE ALSO GREAT AT UNSUPERVISED CLASSIFICATION (cscpconf)
This paper aims to provide insight into the transferability of deep CNN features to unsupervised problems. We study the impact of different pretrained CNN feature extractors on the problem of image set clustering for object classification as well as fine-grained classification. We propose a rather straightforward pipeline combining deep-feature extraction using a CNN pretrained on ImageNet and a classic clustering algorithm to classify sets of images. This approach is compared to state-of-the-art algorithms in image clustering and provides better results. These results strengthen the belief that supervised training of deep CNNs on large datasets, with a large variability of classes, extracts better features than most carefully designed engineering approaches, even for unsupervised tasks. We also validate our approach on a robotic application, consisting of sorting and storing objects smartly based on clustering.
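A minimal sketch of the described pipeline, under stated assumptions: the summary does not fix the extractor or clusterer, so ResNet-50 with ImageNet weights and k-means are illustrative choices, and image_paths is a hypothetical list of file paths.

```python
# Deep-feature extraction with an ImageNet-pretrained CNN, then classic
# clustering; model and clusterer are illustrative assumptions.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.cluster import KMeans
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()          # expose the 2048-d pooled features
model.eval()
preprocess = weights.transforms()

def extract_features(paths):
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                             for p in paths])
        return model(batch).numpy()

# usage (image_paths is a hypothetical list of file paths):
# labels = KMeans(n_clusters=5, n_init=10).fit_predict(extract_features(image_paths))
```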
Dual Tree Complex Wavelet Transform, Probabilistic Neural Network and Fuzzy C... (IJAEMSJORNAL)
The paper proposes an ad hoc technique for MRI brain image classification and segmentation. It is an automated framework for stage classification using a learning mechanism, which detects brain tumors through spatial fuzzy clustering for biomedical applications. Automated classification and recognition of tumors in diverse MRI images is motivated by the high precision required when dealing with human life. The proposal employs a Spatial Fuzzy Clustering Algorithm to segment MRI images and diagnose brain tumors in their earlier phase by scrutinizing the anatomical structure. An Artificial Neural Network (ANN) is used to categorize the affected tumor region in the brain, a Dual Tree Complex Wavelet Transform (DT-CWT) decomposition scheme is used for texture analysis, and a Probabilistic Neural Network (PNN) with a Radial Basis Function (RBF) performs the automated brain tumor classification. Processing runs in two phases: feature extraction followed by classification via the PNN-RBF network. The classifier was assessed on its training performance and classification accuracy.
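Since the summary names a PNN-RBF classifier, here is a compact Parzen-window PNN sketch over precomputed feature vectors; the DT-CWT feature extraction stage is omitted, and sigma is an assumed smoothing width.

```python
# Parzen-window PNN: each training pattern is an RBF unit; the class with the
# highest mean activation wins. X_train/X_test are (n, d) feature arrays.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))          # RBF activations
        # class score = mean activation over that class's pattern units
        scores = {c: k[y_train == c].mean() for c in np.unique(y_train)}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)
```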
A Comparative Case Study on Compression Algorithm for Remote Sensing Images (DR.P.S.JAGADEESH KUMAR)
This document summarizes research on compression algorithms for remote sensing images. It begins with an abstract describing the challenges of transmitting large remote sensing images from sensors to networks. The document then reviews 18 different research papers on various compression algorithms for remote sensing images, including wavelet-based algorithms, fractal coding methods, and region-based approaches. It evaluates each algorithm's performance in compressing remote sensing images while maintaining quality. The document aims to perform a comparative case study of these different compression algorithms.
RunPool: A Dynamic Pooling Layer for Convolution Neural Network (Putra Wanda)
Deep learning (DL) has achieved significant performance in computer vision problems, mainly in automatic feature extraction and representation. However, it is not easy to determine the best pooling method for a given case study: experts can implement the best type of pooling for one image processing case, yet it might not be optimal for other tasks, which is at odds with the philosophy of DL. In a dynamic neural network architecture, it is not practically possible to hand-pick a proper pooling technique for every layer, which is the primary reason a fixed pooling choice cannot be applied to dynamic and multidimensional datasets. To deal with these limitations, an optimal pooling method is needed as a better option than max pooling and average pooling. We therefore introduce a dynamic pooling layer called RunPool to train convolutional neural network (CNN) architectures. RunPool pooling is proposed to regularize the neural network by replacing the deterministic pooling functions. In the final section, we test the proposed pooling layer on classification problems with an online social network (OSN) dataset.
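The abstract does not define RunPool itself, so the sketch below is only a generic illustration of the idea of replacing a deterministic pooling function with a stochastic one: a PyTorch layer that randomly mixes max and average pooling during training. It is not the paper's RunPool.

```python
# Hypothetical stochastic pooling layer (illustration of the concept only,
# not the paper's RunPool definition).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticMixedPool2d(nn.Module):
    def __init__(self, kernel_size=2):
        super().__init__()
        self.k = kernel_size

    def forward(self, x):
        mx = F.max_pool2d(x, self.k)
        av = F.avg_pool2d(x, self.k)
        if self.training:                     # random mix regularizes training
            a = torch.rand(1, device=x.device)
            return a * mx + (1 - a) * av
        return 0.5 * (mx + av)                # deterministic at test time
```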
This document discusses using Fourier transforms to measure image similarity. It proposes a metric that uses both the real and imaginary components of the Fourier transform to compute a similarity ranking between two images. The metric calculates the intersection of the covariance matrices of the Fourier transform magnitude and phase spectra of the two images. The approach is shown to be advantageous for images with varying degrees of lighting. It summarizes existing methods for image comparison and feature extraction, and discusses implementing the proposed similarity metric using OpenCV. Sample results are given comparing the new Fourier-based method to existing histogram comparison techniques.
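The exact covariance-intersection computation is not spelled out in the summary, so the following is a simplified stand-in under that caveat: it compares the log-magnitude and phase spectra of two equal-sized grayscale images by correlation.

```python
# Simplified spectral similarity (stand-in, not the document's exact metric):
# correlate log-magnitude and phase spectra of two same-sized grayscale images.
import numpy as np

def fourier_similarity(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    mag = lambda F: np.log1p(np.abs(F)).ravel()
    ph = lambda F: np.angle(F).ravel()
    r_mag = np.corrcoef(mag(Fa), mag(Fb))[0, 1]
    r_ph = np.corrcoef(ph(Fa), ph(Fb))[0, 1]
    return 0.5 * (r_mag + r_ph)      # 1.0 for identical images
```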
Survey paper on image compression techniques (IRJET Journal)
This document summarizes and compares several popular image compression techniques: wavelet compression, JPEG/DCT compression, vector quantization (VQ), fractal compression, and genetic algorithm compression. It finds that all techniques perform satisfactorily at 0.5 bits per pixel, but for very low bit rates like 0.25 bpp, wavelet compression techniques like EZW perform best in terms of compression ratio and quality. Specifically, EZW and JPEG are more practical than others at low bit rates. The document also notes advantages and disadvantages of each technique and concludes hybrid approaches may achieve even higher compression ratios while maintaining image quality.
INVESTIGATIONS OF THE INFLUENCES OF A CNN’S RECEPTIVE FIELD ON SEGMENTATION O... (adeij1)
Segmentation of objects with various sizes is relatively less explored in medical imaging, and has been very challenging in computer vision tasks in general. We hypothesize that the receptive field of a deep model corresponds closely to the size of the object to be segmented, which could critically influence the segmentation accuracy of objects with varied sizes. In this study, we employed “AmygNet”, a dual-branch fully convolutional neural network (FCNN) with two different sizes of receptive fields, to investigate the effects of the receptive field on segmenting four major subnuclei of the bilateral amygdalae. The experiment was conducted on 14 subjects, all 3-dimensional MRI human brain images. Since the scales of the subnuclear groups differ, by investigating the accuracy of each subnuclear group under receptive fields of various sizes, we can determine which receptive field size is suitable for objects of each scale. Under the given conditions, AmygNet with multiple receptive fields shows great potential in segmenting objects of different sizes.
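For readers wanting to relate receptive field size to object scale, the standard receptive-field arithmetic for a stack of convolutions can be computed as below (an illustrative helper, not part of AmygNet).

```python
# Receptive-field arithmetic for stacked conv layers: a quick way to check how
# architecture choices relate to the size of the objects to be segmented.
def receptive_field(layers):
    """layers: list of (kernel, stride) tuples, input-to-output order."""
    r, j = 1, 1                      # receptive field and cumulative stride
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# e.g. three 3x3 convs with stride 1 -> receptive field 7
print(receptive_field([(3, 1), (3, 1), (3, 1)]))
```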
This document presents a method for image upscaling using a fuzzy ARTMAP neural network. It begins with an introduction to image upscaling and interpolation techniques. It then provides background on ARTMAP neural networks and fuzzy logic. The proposed method uses a linear interpolation algorithm trained with an ARTMAP network. Results show the method performs better than nearest neighbor interpolation in terms of peak signal-to-noise ratio, mean squared error, and structural similarity, though not as high as bicubic interpolation. Overall, the fuzzy ARTMAP network provides an effective way to perform image upscaling with fewer artifacts than traditional methods.
On Text Realization Image Steganography (CSCJournals)
In this paper the steganography strategy is implemented in a different way and from a different scope: the important data is neither hidden in an image nor transferred through the communication channel inside an image. Instead, a well-known image that exists on both sides of the channel is used, and a text message containing the important data is transmitted; with suitable operations, the source image can be re-mixed and re-made. The algorithm is implemented in MATLAB 7 and shows a high ability to handle images of different types and sizes. Perfect reconstruction was achieved on the receiving side. Most interestingly, this algorithm for secured image transmission transmits no images at all.
IRJET - Symmetric Image Registration based on Intensity and Spatial Informati... (IRJET Journal)
This document presents a proposed system for symmetric image registration based on intensity and spatial information using a technique called the Coloured Simple Algebraic Algorithm (CSAA). The system first preprocesses color images, extracts features, then classifies images as symmetric or asymmetric using a neural network. It is shown to provide accurate and robust registration of medical and biomedical images. The system is implemented and evaluated on sample images, demonstrating it can successfully identify symmetric versus asymmetric images. The proposed approach aims to improve on existing techniques for intensity-based image registration tasks.
Intel, Intelligent Systems Lab: Stable View Synthesis Whitepaper (Alejandro Franceschi)
Intel, Intelligent Systems Lab: Stable View Synthesis Whitepaper
We present Stable View Synthesis (SVS). Given a set of source images depicting a scene from freely distributed viewpoints, SVS synthesizes new views of the scene. The method operates on a geometric scaffold computed via structure-from-motion and multi-view stereo. Each point on this 3D scaffold is associated with view rays and corresponding feature vectors that encode the appearance of this point in the input images.
The core of SVS is view-dependent on-surface feature aggregation, in which directional feature vectors at each 3D point are processed to produce a new feature vector for a ray that maps this point into the new target view.
The target view is then rendered by a convolutional network from a tensor of features synthesized in this way for all pixels. The method is composed of differentiable modules and is trained end-to-end. It supports spatially-varying view-dependent importance weighting and feature transformation of source images at each point; spatial and temporal stability due to the smooth dependence of on-surface feature aggregation on the target view; and synthesis of view-dependent effects such as specular reflection.
Experimental results demonstrate that SVS outperforms state-of-the-art view synthesis methods both quantitatively and qualitatively on three diverse real-world datasets, achieving unprecedented levels of realism in free-viewpoint video of challenging large-scale scenes.
This document discusses data hiding techniques for images. It begins by introducing steganography and some common image steganography methods like LSB substitution, blocking, and palette modification. It then reviews related work on minimizing distortion in steganography, modifying matrix encoding for minimal distortion, and designing adaptive steganographic schemes. The document proposes using a universal distortion measure to evaluate embedding changes independently of the domain. It presents a system for reversible data hiding in encrypted images that partitions the image, encrypts it, hides data in the encrypted image, and allows extraction from the decrypted or encrypted image. Least significant bit substitution is discussed as an approach for hiding data in the encrypted image.
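As a concrete reference for the LSB substitution mentioned above, a minimal embed/extract pair for an 8-bit grayscale cover image might look like this (illustrative, not the reviewed system).

```python
# Classic LSB substitution: hide a bit string in the least significant bit
# plane of an 8-bit grayscale cover image, then read it back.
import numpy as np

def lsb_embed(cover, bits):
    flat = cover.astype(np.uint8).ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
msg = np.random.randint(0, 2, 100)
assert np.array_equal(lsb_extract(lsb_embed(cover, msg), 100), msg)
```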
PR-065 : High-Resolution Image Synthesis and Semantic Manipulation with Condi... (광희 이)
This document summarizes research on generating high-resolution, photo-realistic images from semantic label maps using conditional generative adversarial networks (GANs). The goal is to enable interactive visual manipulation of objects by removing, adding, or changing object categories. The proposed method improves upon previous work by using a coarse-to-fine generator with local enhancers, multi-scale discriminators, and learning an instance-level feature embedding to generate diverse images and allow control at the instance level. Experimental results on several datasets demonstrate the method generates higher resolution and more photo-realistic images compared to previous work.
A Novel Multiple-kernel based Fuzzy c-means Algorithm with Spatial Informatio... (CSCJournals)
The fuzzy c-means (FCM) algorithm has proved its effectiveness for image segmentation. However, it still lacks robustness to noise and outliers, especially in the absence of prior knowledge of the noise. To overcome this problem, a novel multiple-kernel fuzzy c-means (NMKFCM) methodology with spatial information is introduced as a framework for the image segmentation problem. The algorithm incorporates spatial neighborhood membership values into the standard kernels used in the kernel FCM (KFCM) algorithm and modifies the membership weighting of each cluster. The proposed NMKFCM algorithm provides new flexibility to utilize different pixel information in the image segmentation problem. The proposed algorithm is applied to brain MRI degraded by Gaussian noise and salt-and-pepper noise, and it is more robust to noise than other existing image segmentation algorithms from the FCM family.
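For context, the baseline fuzzy c-means updates that KFCM and NMKFCM extend look roughly like this in NumPy; m is the fuzzifier and X holds one feature row per pixel. The kernel and spatial-neighborhood terms of NMKFCM are not reproduced here.

```python
# Standard FCM: alternate weighted-mean center updates and membership updates
# u_ik proportional to d_ik^(-2/(m-1)), normalized over clusters.
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, eps=1e-5):
    U = np.random.dirichlet(np.ones(c), size=len(X))      # memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)      # membership update
        if np.abs(U_new - U).max() < eps:
            return centers, U_new
        U = U_new
    return centers, U
```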
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This document discusses clustering of uncertain data objects. It first provides background on clustering uncertain data and challenges involved. It then reviews various existing approaches for clustering uncertain data, including using soft classifiers and probabilistic databases. The document proposes combining k-means clustering with Voronoi diagrams and indexing techniques to improve the performance and efficiency of clustering uncertain datasets. It outlines a plan to integrate k-means with Voronoi diagrams and indexing to reduce execution time and increase clustering performance and results for uncertain data. Finally, it concludes that combining clustering with indexing approaches can better handle uncertain data clustering challenges.
Implementation of Fuzzy Logic for the High-Resolution Remote Sensing Images w... (IOSR Journals)
This document describes an implementation of fuzzy logic for high-resolution remote sensing image classification with improved accuracy. It discusses using an object-based approach with fuzzy rules to classify urban land covers in a satellite image. The approach involves image segmentation using k-means clustering or ISODATA clustering. Features are then extracted from the image objects and fuzzy logic is applied to classify the objects based on membership functions. The method was tested on different sensor and resolution images in MATLAB and showed improved classification accuracy over other techniques, achieving lower entropy in results. Future work planned includes designing an unsupervised classification model combining k-means clustering and fuzzy-based object orientation.
There exists a plethora of algorithms to perform image segmentation, and there are several issues related to the execution time of these algorithms. Image segmentation is essentially a labeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper this technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms, while the modified algorithm executes faster and gives automatic segmentation without any human intervention. Its results match image edges closely, in line with human perception.
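A small sketch of the alternating scheme described above, under assumptions: the ML step re-estimates class means, and the MAP step relabels pixels by iterated conditional modes (ICM) with a Potts smoothness prior. Here beta is an assumed smoothing weight; the paper's exact model may differ.

```python
# Alternating ML/MAP segmentation sketch (ICM with a Potts prior).
import numpy as np

def icm_segment(img, k=3, beta=1.0, iters=5):
    # initialize labels from intensity quantiles
    edges = np.quantile(img, np.linspace(0, 1, k + 1)[1:-1])
    labels = np.digitize(img, edges)
    for _ in range(iters):
        # ML step: re-estimate class means from the current labeling
        means = np.array([img[labels == c].mean() if np.any(labels == c)
                          else img.mean() for c in range(k)])
        cost = (img[..., None] - means) ** 2          # data (likelihood) term
        pad = np.pad(labels, 1, mode='edge')
        for c in range(k):                            # Potts prior: penalize
            disagree = ((pad[:-2, 1:-1] != c).astype(float)   # disagreeing
                        + (pad[2:, 1:-1] != c)                # 4-neighbours
                        + (pad[1:-1, :-2] != c)
                        + (pad[1:-1, 2:] != c))
            cost[..., c] += beta * disagree
        labels = cost.argmin(axis=2)                  # MAP step (ICM update)
    return labels
```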
The objective of this paper is to present a hybrid approach for edge detection. Under this technique, edge detection is performed in two phases: in the first phase, the Canny algorithm is applied for image smoothing, and in the second phase a neural network detects the actual edges. A neural network is a natural tool for edge detection, since it is a non-linear network with built-in thresholding capability. It can be trained with the backpropagation technique using few training patterns, but the most important and difficult part is to identify a correct and proper training set.
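A sketch of the two-phase idea under assumptions: OpenCV's Canny supplies candidate edge pixels from the smoothed image, and a small backprop-trained network judges each candidate from its 3x3 neighbourhood. The file name and labels are placeholders; a real training set would be hand-built, as the paper stresses.

```python
# Two-phase edge detection sketch: Canny candidates, then a small MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
smooth = cv2.GaussianBlur(img, (5, 5), 1.4)           # phase 1: smoothing
candidates = cv2.Canny(smooth, 50, 150)               # candidate edge map

ys, xs = np.nonzero(candidates)
h, w = img.shape
patches = np.stack([smooth[y-1:y+2, x-1:x+2].ravel() / 255.0
                    for y, x in zip(ys, xs)
                    if 0 < y < h - 1 and 0 < x < w - 1])

# phase 2: placeholder labels; a real, hand-built training set is required
labels = np.random.randint(0, 2, len(patches))
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)
clf.fit(patches, labels)                              # backprop training
```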
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
DCT based Steganographic Evaluation parameter analysis in Frequency domain by... (IOSR Journals)
This document analyzes DCT-based steganography using a modified JPEG luminance quantization table to improve evaluation parameters like PSNR, mean square error, and capacity. The authors propose modifying the default 8x8 quantization table by adjusting frequency values in 4 bands to increase image quality for the embedded stego image. Experimental results on test images show that using the modified table improves PSNR, decreases mean square error, and increases maximum embedding capacity compared to the default table. Therefore, the proposed method allows more secret data to be hidden with less distortion and improved image quality.
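The two evaluation parameters named above can be computed for 8-bit images as follows (standard definitions, not the paper's code).

```python
# MSE and PSNR between a cover image and its stego counterpart (8-bit data).
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(cover, stego, peak=255.0):
    e = mse(cover, stego)
    return np.inf if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```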
In this video from the 2015 HPC User Forum in Broomfield, Barry Bolding from Cray presents: HPC + D + A = HPDA?
"The flexible, multi-use Cray Urika-XA extreme analytics platform addresses perhaps the most critical obstacle in data analytics today — limitation. Analytics problems are getting more varied and complex but the available solution technologies have significant constraints. Traditional analytics appliances lock you into a single approach and building a custom solution in-house is so difficult and time consuming that the business value derived from analytics fails to materialize. In contrast, the Urika-XA platform is open, high performing and cost effective, serving a wide range of analytics tools with varying computing demands in a single environment. Pre-integrated with the Hadoop and Spark frameworks, the Urika-XA system combines the benefits of a turnkey analytics appliance with a flexible, open platform that you can modify for future analytics workloads. This single-platform consolidation of workloads reduces your analytics footprint and total cost of ownership."
Learn more: http://www.cray.com/products/analytics/urika-xa
Watch the video presentation: http://wp.me/p3RLEV-3yR
Sign up for our insideBIGDATA Newsletter: http://insidebigdata.com/newsletter
A novel method is proposed for image segmentation based on probabilistic field theory. This model assumes that all the pixels of an image and some unknown parameters form a field. According to this model, the pixel labels are generated by a compound function of the field. The main novelty of this model is that it considers both the features of the pixels and the interdependence among the pixels. The parameters are generated by a novel spatially variant mixture model and estimated by an expectation-maximization (EM)-based algorithm; thus, spatial smoothness is simultaneously imposed on the prior knowledge. Numerical experiments are presented in which the proposed method and other mixture-model-based methods were tested on synthetic and real-world images. These experimental results demonstrate that the algorithm achieves competitive performance compared to other methods.
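As a point of reference for the mixture-model family discussed above, a plain (spatially invariant) Gaussian mixture fitted by EM on pixel intensities looks like this with scikit-learn; the paper's spatially variant prior is not reproduced.

```python
# Baseline EM segmentation: fit a Gaussian mixture to pixel intensities and
# label each pixel with its most likely component.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(img, k=3):
    X = img.reshape(-1, 1).astype(np.float64)
    gm = GaussianMixture(n_components=k, covariance_type='full').fit(X)
    return gm.predict(X).reshape(img.shape)
```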
An Efficient Clustering Method for Aggregation on Data Fragments (IJMER)
Clustering is an important step in the process of data analysis, with applications to numerous fields. Clustering ensembles have emerged as a powerful technique for combining different clustering results to obtain a quality cluster. Existing clustering aggregation algorithms are applied directly to a large number of data points and become inefficient when that number is large. This project defines an efficient approach for clustering aggregation based on data fragments; in the fragment-based approach, a data fragment is any subset of the data. To increase efficiency, clustering aggregation is performed directly on data fragments under a comparison measure and normalized mutual information measures, and enhanced versions of the clustering aggregation algorithms (Agglomerative, Furthest, and Local Search) are described, which keep the computational complexity minimal while increasing accuracy.
Region wise processing of an image using multithreading in multi core environ (IAEME Publication)
This document presents a method for region-wise parallel processing of images using multithreading in a multi-core environment. The method divides large images into distinct regions of interest and assigns each region to a separate processing core. Each core then calculates statistical features for its assigned region in parallel. Experimental results on cell images show speedups of up to 600% when using 8 threads on an Intel Xeon processor compared to sequential processing on a single core. The document concludes that region-wise parallel processing provides significantly more efficient results than existing parallel image processing methods. This approach has applications in medical imaging where fast analysis of large images is important.
Region wise processing of an image using multithreading in multi core environ (IAEME Publication)
This document discusses region-wise parallel processing of images using multithreading in a multi-core environment and its applications in medical imaging. It proposes dividing large images into regions of interest and assigning each region to a separate processor core for parallel processing. This approach could provide significantly faster results than existing parallel image processing methods. The document describes calculating statistical features like mean, standard deviation, and variance for each individual region. It presents experimental results showing speedups of around 200% for a core i3 processor and 600% for an Intel Xeon processor compared to sequential processing. The approach and its speed benefits are proposed to have applications in processing large medical images commonly used in areas like CT, PET, and MRI scans.
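A sketch of the region-wise idea in Python, assuming horizontal strips as the regions of interest: each worker process computes the statistical features (mean, standard deviation, variance) named above for its strip.

```python
# Region-wise parallel statistics: split the image into strips and hand each
# strip to a separate worker process.
import numpy as np
from multiprocessing import Pool

def region_stats(region):
    return region.mean(), region.std(), region.var()

def parallel_stats(img, n_regions=8):
    regions = np.array_split(img, n_regions, axis=0)   # horizontal strips
    with Pool(n_regions) as pool:
        return pool.map(region_stats, regions)

if __name__ == "__main__":
    img = np.random.rand(4096, 4096)                   # stand-in for a scan
    print(parallel_stats(img))
```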
Development and Comparison of Image Fusion Techniques for CT&MRI Images (IJERA Editor)
Image processing techniques primarily focus upon enhancing the quality of an image or a set of images to derive the maximum information from them. Image fusion is a technique for producing a superior-quality image from a set of available images. It is the process of combining relevant information from two or more images into a single image, wherein the resulting image is more informative and complete than any of the input images. A lot of research is being done in this field, encompassing areas of computer vision, automatic object detection, image processing, parallel and distributed processing, robotics and remote sensing. This paper explains the theoretical and implementation issues of seven image fusion algorithms and presents experimental results for them. The fusion algorithms are assessed based on the study and development of several image quality metrics.
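Two generic textbook fusion rules, shown only to make the concept concrete; the seven algorithms the paper actually compares are not specified in this summary, and the activity measure (an absolute Laplacian) is an illustrative choice.

```python
# Elementary fusion rules for two co-registered CT/MRI slices.
import numpy as np

def fuse_average(ct, mri):
    # simplest rule: pixel-wise average of the two inputs
    return 0.5 * (ct.astype(np.float64) + mri.astype(np.float64))

def laplacian_activity(x):
    x = x.astype(np.float64)
    d2r = np.gradient(np.gradient(x, axis=0), axis=0)
    d2c = np.gradient(np.gradient(x, axis=1), axis=1)
    return np.abs(d2r + d2c)            # crude local-contrast measure

def fuse_select_max(ct, mri):
    # keep, per pixel, the source with the larger local activity
    return np.where(laplacian_activity(ct) >= laplacian_activity(mri), ct, mri)
```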
COMPARATIVE PERFORMANCE ANALYSIS OF RNSC AND MCL ALGORITHMS ON POWER-LAW DIST... (acijjournal)
Cluster analysis of graph-related problems is an important issue nowadays. Different types of graph clustering techniques have appeared in the field, but most of them are vulnerable in terms of effectiveness and fragmentation of output in real-world applications across diverse systems. In this paper, we provide a comparative behavioural analysis of the RNSC (Restricted Neighbourhood Search Clustering) and MCL (Markov Clustering) algorithms on power-law distribution graphs. RNSC is a graph clustering technique using stochastic local search; it tries to achieve optimal-cost clustering by assigning cost functions to the set of clusterings of a graph, and was implemented by A. D. King only for undirected and unweighted random graphs. Another popular graph clustering algorithm, MCL, is based on a stochastic flow simulation model for weighted graphs. There are plentiful applications of power-law or scale-free graphs in nature and society. Scale-free topology is stochastic, i.e. nodes are connected in a random manner. Complex network topologies like the World Wide Web, the web of human sexual contacts, or the chemical network of a cell basically follow a power-law distribution in representing different real-life systems. This paper uses real large-scale power-law distribution graphs to conduct a performance analysis of RNSC behaviour compared with the Markov clustering (MCL) algorithm. Extensive experimental results on several synthetic and real power-law distribution datasets reveal the effectiveness of our approach to comparative performance measurement of these algorithms on the basis of clustering cost, cluster size, modularity index of the clustering results, and normalized mutual information (NMI).
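Of the two algorithms, MCL is simple enough to sketch: alternate expansion (matrix power) and inflation (elementwise power plus column normalization) on a column-stochastic matrix built from the weighted adjacency matrix A. This is a bare-bones illustration, not the benchmarked implementation.

```python
# Bare-bones Markov Clustering on a weighted adjacency matrix A.
import numpy as np

def mcl(A, expand=2, inflate=2.0, iters=50, tol=1e-6):
    M = A + np.eye(len(A))                    # self-loops stabilize the flow
    M = M / M.sum(axis=0, keepdims=True)      # make columns stochastic
    for _ in range(iters):
        M_new = np.linalg.matrix_power(M, expand)     # expansion
        M_new = M_new ** inflate                      # inflation
        M_new /= M_new.sum(axis=0, keepdims=True)
        if np.abs(M_new - M).max() < tol:
            M = M_new
            break
        M = M_new
    # attractor rows of the converged matrix define the clusters
    return [np.flatnonzero(row > 1e-4) for row in M if row.max() > 1e-4]
```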
Influence of local segmentation in the context of digital image processing (iaemedu)
This document discusses local segmentation in digital image processing. It begins by defining local segmentation as a reasonable approach for low-level image processing that examines existing algorithms and creates new ones. Local segmentation can be applied to important image processing tasks. The document then evaluates using local segmentation for image denoising, finding it highly competitive with state-of-the-art algorithms. Local segmentation attempts to separate signal from noise on a local scale, allowing higher-level algorithms to operate directly on the signal without amplifying noise.
This document discusses using principal component analysis (PCA) and the discrete cosine transform (DCT) to recognize images from a database. It explains how bitmaps store image data, DCT compacts image energy, and PCA reduces dimensionality by finding the principal components via eigenvectors of the covariance matrix. An algorithm is proposed that uses DCT, PCA on a 3x3 block, characteristic equations to find the maximum eigenvector, and least mean square comparison to recognize queries against the database images.
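A sketch of the two ingredients named above, with assumptions: the 8x8 low-frequency DCT corner kept here is an illustrative choice (the summary mentions a 3x3 block), and PCA is done via eigenvectors of the covariance matrix, as described.

```python
# DCT for energy compaction, then PCA via the covariance eigenvectors.
import numpy as np
from scipy.fft import dctn

def dct_pca_features(images, n_components=8):
    # energy-compacting step: keep a low-frequency corner of each 2-D DCT
    X = np.stack([dctn(im, norm='ortho')[:8, :8].ravel() for im in images])
    Xc = X - X.mean(axis=0)                      # center the data
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs[:, ::-1][:, :n_components]          # top principal directions
    return Xc @ W                                # projected feature vectors
```

Recognition can then proceed by least-mean-square comparison of these feature vectors, as the document outlines.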
A Review on Matching For Sketch Technique (IOSR Journals)
This document summarizes several techniques for sketch-based image retrieval. It discusses methods using SIFT features, HOG descriptors, color segmentation, and gradient orientation histograms. It also reviews applications of these techniques to domains like facial recognition, graffiti matching, and tattoo identification for law enforcement. The techniques aim to extract visual features from sketches that can be used to match and retrieve similar images from databases. While achieving good results, the methods have limitations regarding database size and specificity, and accuracy with complex textures and shapes. Overall, the review examines advances in using sketches as queries for image retrieval.
A deep learning based stereo matching model for autonomous vehicle (IAESIJAI)
Autonomous vehicles are one of the prominent areas of research in computer vision. In today’s AI world, the concept of autonomous vehicles has become popular largely to avoid accidents caused by driver negligence. Accurately perceiving the depth of the surrounding region is a challenging task in autonomous vehicles. Sensors like light detection and ranging (LiDAR) can be used for depth estimation, but these sensors are expensive; hence stereo matching is an alternative solution for estimating depth. The main difficulty in stereo matching is minimizing mismatches in ill-posed regions, such as occluded, textureless and discontinuous regions. This paper presents an efficient deep stereo matching technique for estimating the disparity map from stereo images in ill-posed regions. Images from the Middlebury stereo dataset are used to assess the efficacy of the proposed model. The experimental outcome depicts that the proposed model generates reliable results in the occluded, textureless and discontinuous regions as compared to existing techniques.
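For contrast with the deep model, a classical block-matching baseline in OpenCV is shown below; it is exactly the kind of method that struggles in the occluded and textureless regions the paper targets. File names are placeholders.

```python
# Classical stereo baseline: OpenCV block matching on a rectified pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder paths
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point out
disp_vis = cv2.normalize(disp, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", disp_vis)
```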
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONS (ijscmcj)
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations used to implement the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD decomposition, O(n³), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments: the first is associated with the CPU and MATLAB R2011a, and the second with graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to achieve a resulting matrix of large size. For both environments, computations were performed for double- and single-precision data.
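The LSI reduction being benchmarked amounts to a truncated SVD; a rank-k NumPy version of that step might look like this (the article's CPU/MATLAB and GPU/CULA variants compute the same factorization).

```python
# Rank-k LSI reduction of a term-by-document matrix A via truncated SVD.
import numpy as np

def lsi_reduce(A, k=100):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    docs_k = np.diag(s[:k]) @ Vt[:k, :]     # documents in the latent space
    return docs_k, U[:, :k]                 # plus the term directions
```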
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times it is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements. Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if the signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal. Sparse sampling (also known as compressive sampling or compressed sensing) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon–Nyquist sampling theorem. There are two conditions under which recovery is possible [1]. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the isometric property and is sufficient for sparse signals. This opens the possibility of compressed data acquisition protocols that directly acquire just the important information. Sparse sampling (CS) is a fast growing area of research. It avoids an extravagant acquisition process by measuring fewer values to reconstruct the image or signal. Sparse sampling has been adopted successfully in various fields of image processing and has proved its efficiency. Some image processing applications, such as face recognition, video encoding, image encryption and reconstruction, are presented here.
IRJET- Fusion based Brain Tumor DetectionIRJET Journal
1. The document discusses a method for detecting brain tumors using medical image fusion and support vector machines (SVM).
2. It involves fusing two MRI images using SVM to create a single fused image with more information than the original images. Texture and wavelet features are then extracted from the fused image.
3. The SVM classifier classifies the brain tumors as benign or malignant based on the trained and tested features extracted from the fused image.
1) The document proposes analog signal processing as a solution to reduce computation time for image alignment algorithms that have high computational loads.
2) It modifies the Normalized Cross Correlation (NCC) algorithm for image alignment by only using the diagonal elements of the template and reference image blocks to calculate correlation. This reduces computations compared to using all pixels.
3) A new imaging architecture is proposed that uses an analog processor to implement the modified NCC algorithm in parallel with digital image acquisition, providing faster computation.
This document summarizes an approach for content-based image retrieval using histograms. It discusses representing images as Histogram Attributed Relational Graphs (HARGs) where each node is an image region and edges represent relations between regions. A query is converted to a FARG which is compared to database FARGs using a graph matching algorithm. The system was tested on a database of natural images and performance was quantified using standard measures. It achieved good retrieval results but leaves room for improving retrieval time and reducing semantic gaps between low-level features and human perceptions.
This document proposes a K-means clustering based image compression scheme for wireless imaging sensor networks. It aims to reduce image size and transmission energy usage. The scheme splits the K-means clustering process into a learning phase on a server and a compression phase on embedded devices. In the learning phase, the server determines optimal color centroids. In the compression phase, embedded devices assign pixel colors to the closest centroids, reducing an image from 256 to 16 colors. Evaluations show the scheme compresses images by 50% while maintaining good quality and reducing energy usage by 49% compared to sending raw images.
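The compression phase described above reduces to a nearest-centroid assignment per pixel. The following is a minimal C++ sketch of that step, assuming the 16 color centroids have already been learned on the server; all names are illustrative, not taken from the paper.

    // Compression-phase sketch: map each RGB pixel to the index of the
    // nearest of 16 pre-learned color centroids, so the image can be
    // transmitted as 4-bit palette indices plus a small palette.
    #include <array>
    #include <cstdint>
    #include <vector>

    struct RGB { std::uint8_t r, g, b; };

    // Squared Euclidean distance in RGB space.
    static int dist2(const RGB& a, const RGB& b) {
        int dr = a.r - b.r, dg = a.g - b.g, db = a.b - b.b;
        return dr * dr + dg * dg + db * db;
    }

    // Assign every pixel to its closest centroid; returns palette indices.
    std::vector<std::uint8_t> compress(const std::vector<RGB>& pixels,
                                       const std::array<RGB, 16>& centroids) {
        std::vector<std::uint8_t> indices;
        indices.reserve(pixels.size());
        for (const RGB& p : pixels) {
            int best = 0, bestD = dist2(p, centroids[0]);
            for (int k = 1; k < 16; ++k) {
                int d = dist2(p, centroids[k]);
                if (d < bestD) { bestD = d; best = k; }
            }
            indices.push_back(static_cast<std::uint8_t>(best));
        }
        return indices;
    }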
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training...CSCJournals
The document compares the Levenberg-Marquardt and Scaled Conjugate Gradient algorithms for training a multilayer perceptron neural network for image compression. It finds that while both algorithms performed comparably in terms of accuracy and speed, the Levenberg-Marquardt algorithm achieved slightly better accuracy as measured by average training accuracy and mean squared error, while the Scaled Conjugate Gradient algorithm was faster as measured by average training iterations. The document compresses a standard test image called Lena using both algorithms and analyzes the results.
This document summarizes a research paper about developing a new set of low-complexity features for detecting steganography in JPEG images. The proposed features, called DCTR features, are computed by taking the discrete cosine transform (DCT) of non-overlapping 8x8 blocks of the image, resulting in 64 feature maps. Histograms are formed from the quantized noise residuals in these feature maps. This approach has lower computational complexity than previous rich models used for steganalysis and provides competitive detection accuracy across different steganographic algorithms while using fewer features. The paper introduces the concept of an undecimated DCT and explains how it relates to previous work in JPEG steganalysis.
Parallel Batch-Dynamic Graphs: Algorithms and Lower BoundsSubhajit Sahu
In this paper we study the problem of dynamically maintaining graph properties under batches of edge insertions and deletions in the massively parallel model of computation. In this setting, the graph is stored on a number of machines, each having space strongly sublinear with respect to the number of vertices, that is, n^ε for some constant 0 < ε < 1. Our goal is to handle batches of updates and queries where the data for each batch fits onto one machine in constant rounds of parallel computation, as well as to reduce the total communication between the machines. This objective corresponds to the gradual buildup of databases over time, while the goal of obtaining constant rounds of communication for problems in the static setting has been elusive for problems as simple as undirected graph connectivity.
We give an algorithm for dynamic graph connectivity in this setting with constant communication rounds and communication cost almost linear in terms of the batch size. Our techniques combine a new graph contraction technique, an independent random sample extractor from correlated samples, and distributed data structures supporting parallel updates and queries in batches. We also illustrate the power of dynamic algorithms in the MPC model by showing that the batched version of the adaptive connectivity problem is P-complete in the centralized setting, but sub-linear sized batches can be handled in a constant number of rounds. Due to the wide applicability of our approaches, we believe this represents a practically-motivated workaround to the current difficulties in designing more efficient massively parallel static graph algorithms.
Parallel Batch-Dynamic Graphs: Algorithms and Lower BoundsSubhajit Sahu
Highlighted notes on Parallel Batch-Dynamic Graphs: Algorithms and Lower Bounds, made while doing research work under Prof. Kishore Kothapalli.
Laxman Dhulipala, David Durfee, Janardhan Kulkarni, Richard Peng, Saurabh Sawlani, Xiaorui Sun:
Parallel Batch-Dynamic Graphs: Algorithms and Lower Bounds. SODA 2020: 1300-1319
Joint, Image-Adaptive Compression and Watermarking by GABased Wavelet Localiz...CSCJournals
Teleradiology using the internet can offer patients in remote locations the benefit of diagnosis and advice by a super-specialist in a metropolis. However, the exchange of vital information such as clinical images and textual facts over a public network poses the challenges of transmitting a large volume of data as well as preventing distortion of the images. In this paper, a novel application system to jointly compress and watermark medical images in a near-lossless, image-adaptive fashion is proposed to address these challenges. The system design uses a genetic algorithm for adaptive wavelet coding to generate compressed data, and integrates dual watermarks to realize the security and authentication of the compressed data. The GA-based image-adaptive compression provides a feasible way to obtain an optimal compression ratio without compromising image fidelity upon subsequent watermarking. A multi-gene approach, with one gene coding for the embedding strength of the robust watermark and the other for the number of bits for embedding the semi-fragile watermark, is used for optimal image-adaptive watermarking. A multi-parameter fitness function is designed to address the conflicting requirements of image compression, authenticity and integrity associated with teleradiology. Experimental results show the ability of the system to detect tampering and to limit the peak error between the original and the watermarked images. Moreover, as the watermarking is performed on the compressed image, the overhead for watermarking is reduced.
A spatial image compression algorithm based on run length encodingjournalBEEI
Image compression is vital for areas such as communication and the storage of data, which is growing rapidly nowadays. In this paper, a spatial lossy compression algorithm for grayscale images is presented. It exploits the inter-pixel and psycho-visual data redundancies in images. The proposed technique finds paths of connected pixels whose values fluctuate within some small threshold. The path is calculated by looking at the 4-neighbors of a pixel and choosing the best one based on two conditions: first, the selected pixel must not be included in another path; second, the difference between the first pixel in the path and the selected pixel must be within the specified threshold value. A path starts with a given pixel and consists of the locations of the subsequently selected pixels. A run-length encoding scheme is applied to the paths to harvest the inter-pixel redundancy. After applying the proposed algorithm to several test images, promising quality vs. compression ratio results were achieved.
ClusterPaperDaggett
A cluster computer system for the analysis and classification of massively large biomedical image data
T. Daggett *, I. R. Greenshields
Medical Imaging Laboratory, U-31 Room 19, T. L. Booth Research Center, Department of Computer Science and Engineering, 233 Glenbrook Road, The University of Connecticut, Storrs, CT, U.S.A.
* Corresponding author.
Received 25 March 1997
Abstract
The current trend in medical image acquisition is towards the generation of image datasets which are massively large, either because they exhibit fine x, y, or z resolution, are volumetric, are multispectral, or a combination of all of the preceding. Such images pose a significant computational challenge in their analysis, not only in terms of data throughput, but also in terms of platform costs and simplicity. In this paper we describe the role of a cluster of workstations together with two quite different application programming interfaces (APIs) in the quantitative analysis of anatomic image data from the Visible Human Project using an MRF-Gibbs classification algorithm. We describe the typical architecture of a cluster computer, two API options and the parallelization of the MRF-Gibbs procedure for the cluster. Finally, we show speedup results obtained on the cluster and sample classifications of Visible Human data. © 1998 Elsevier Science Ltd. All rights reserved.
Keywords: Image processing; Cluster computing; Parallel computing; MRF-Gibbs; Classification
1. Introduction
Medical image datasets, like a wide variety of other image datasets, are increasingly characterizable as massive in terms of their spatial resolution and/or their spectral resolution. Since the lower bound on processing costs for an n × n image grows quadratically (O(n^2)) with the on-side dimensions of an image (since most meaningful algorithms have to examine every pixel at least once), higher resolution images bring nonlinear growth in the computational costs associated with their processing. On the positive side, many imaging algorithms are easily adapted to the single program multiple data model of parallelism, so that multiple copies of an algorithm can execute in parallel, acting on different subdivisions of the image. Thus, discounting typical parallel overheads, many imaging algorithms are in principle amenable to p-speedup when distributed over p processors. (Many image algorithms are, of course, not so amenable.) It is therefore often the case that the issue impacting the processing throughput of massive images lies not in complex algorithm design, but in platform availability. It will sometimes be true that a large parallel system's costs can be folded into the cost of the imager itself without apparent impact on total system cost (as might be the case with a modern MR imager), but the average biological or medical laboratory interested in analysing these images is unlikely to wish to acquire a large parallel system, both because of cost and because of system management problems. However, even the most modest laboratories will have access to inexpensive PC-level workstations, and it has now become a trend in parallel system design to exploit groups of these systems networked together to form what is known as a cluster computer. Cluster computers, unlike the more expensive symmetric multiprocessors which characterize the bulk of non-cluster parallel systems, are inexpensive to assemble, easy to maintain, extremely fault-tolerant (in the sense that the failure of a cluster component, a workstation, simply means that the cluster has less processing power than it had before), surprisingly powerful (a tribute to the modern microprocessor) and (with newer application programming interfaces) relatively easy to program. Clusters can be quite small yet still powerful; the architecture we describe below consists of 8 Pentium systems capable of a (theoretical) SpecFP throughput of about 245 [1].
The principal software component for application development on parallel systems is the underlying communication mechanism used for inter-process communications. There are a number of different software communication paradigms that have received significant attention. Of these, the message passing interface (MPI) has been much heralded as the future standard for message-passing based communications [2, 3]. A very different and less well known technique for inter-process communications is virtual shared memory. Virtual shared memory allows processes to communicate by directly sharing data as if it existed in a global shared memory space. Processes can access (read or write) information in the memory space without concern for or knowledge of external processes, and can therefore be developed in a more sequential-program fashion [4]. This conceptually simpler view supports the main advantage usually associated with virtual shared memory, which is that the application programming interface is usually quite simple, and therefore the complexity of developing parallel applications is greatly reduced. Paradise (Scientific Computing, New Haven, CT) is a widely used virtual shared memory based communications package that offers a very simple API.
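To make the contrast concrete, the following minimal C++ sketch uses the standard MPI C API (MPI_Send and MPI_Recv, which MPI defines) to move a block of pixel data between two ranks. Under a virtual shared memory package such as Paradise the same exchange would instead appear as reads and writes on a shared region, but since Paradise's API is not documented here, only the MPI side is sketched.

    // Minimal MPI sketch: rank 0 sends a block of pixel data to rank 1.
    // Message passing makes every data movement explicit, in contrast to
    // the virtual-shared-memory style described above.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        std::vector<unsigned char> block(1024);  // one tile of image data
        if (rank == 0) {
            MPI_Send(block.data(), static_cast<int>(block.size()),
                     MPI_UNSIGNED_CHAR, 1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(block.data(), static_cast<int>(block.size()),
                     MPI_UNSIGNED_CHAR, 0, /*tag=*/0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }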
The particular problem we address involves the derivation of quantitative anatomical parameters (such as tissue volume by type, etc.) from the Visible Human dataset [5]. Our medical imaging goal is to segment (by automatic image classification) the anatomical images of the Visible Human dataset so that quantitative anatomical issues and computational geometric models can be constructed from the dataset. In reference to earlier work from MR images [6, 7] we have elected to use the Visible Human dataset to extract out and model the bladder and urethra. Both because it is anatomically simpler and because the dataset is homogeneous in x, y, and z resolution (0.33 mm per pixel), we are working with the newer Visible Female dataset. Even in the small anatomical region of interest we are examining, the dataset is huge: each raw image (2048 × 1216 × 3) exceeds 7 MB. The pelvic imagery alone (from the base of the sacrum to about the linea terminalis on the sacral promontory) occupies about 4200 MB. The algorithmic strategy involves image preprocessing (to remove background and non-anatomical image details) followed by an unsupervised context-dependent classification of the RGB images using a Gibbs classifier (described below).
Automatic image classification, though difficult, is not a new problem, and numerous algorithms have been developed for other problem domains which may aid in the classification process [8]. These algorithms can be grouped into two principal categories: context-dependent and context-independent. Context-independent algorithms perform an image classification based directly on a pixel's intensity and the distribution of the intensity values of an image. In simpler terms, they attempt to classify the pixel purely from its intensity with respect to the rest of the pixel intensities of the image. Context-independent algorithms classify the pixel values of a random image (one where pixels are randomly scattered) identically to the pixels of an image that contains well defined regions (a typical digital image), as long as the images have identical intensity distributions. Well known examples of context-independent algorithms include the nearest mean, maximum likelihood estimation and K nearest neighbors methods [8]. In comparison to context-independent algorithms, context-dependent approaches attempt to utilize relationships, or contextual information, between pixels within an image to perform a classification. An important and popular example from the context-dependent category is the MRF-Gibbs algorithm [9, 10].
The MRF-Gibbs classification method utilizes contextual information derived from the relationship that a point (pixel) in an image has with the neighborhood (spatially close pixels) that encompasses it. This relationship is assumed to exhibit the characteristics of a Markov random field (MRF), in which the neighboring pixels can be effectively used to classify the pixel. The Gibbs component of the MRF-Gibbs classification method stems from the evaluation of this relationship in terms of an energy function. This energy function can be used to produce the ideal image classification through the maximization of the energy function at each pixel of the image. However, the determination of the ideal classification requires that every possible combination of pixel classifications of an image be examined. Therefore an n × n image with L possible classes contained within would require L^{n×n} combinations to be evaluated. This evaluation is computationally intractable and hence not plausible for practical sized images. To address this processing limitation, the determination of the ideal classification can be approximated through the employment of stochastic maximization techniques, such as simulated annealing, to reduce the computational burden of the MRF-Gibbs algorithm. Theoretically, the MRF-Gibbs classifier iterates hundreds (or even thousands) of times over each image, repeatedly computing energy functions and probability distributions, so that the computational demands are enormous.
Faced with this challenge, we elected to construct a cluster computer comprising 8 PC platforms (Pentium 166 MHz systems with 32 MB of memory, running Microsoft Windows NT). The machines are connected both by a 25 Mb/s ATM switch and 10 Mb/s Ethernet, and (as we describe below) support two distinct application programming interfaces (MPI and Paradise) for the implementation of parallel algorithms. In the sections that follow we describe the parallelization of the MRF-Gibbs algorithm in conjunction with simulated annealing, by first identifying the basic operational concepts of the algorithm and then defining how it was parallelized on the cluster. The speedup of the parallelized algorithm, using both MPI and Paradise based communications, over a serial version is presented.
2. MRF-Gibbs
In the core of the MRF-Gibbs algorithm lies the Markov random field (MRF) assumption. The MRF assumption states that the true interpretation of any pixel X_ij, given the true interpretation of all image pixels G, depends only on the interpretation of its neighboring pixels in a neighborhood N_ij (Eq. (1)).

    P(X_{ij} = \omega_k \mid G) = P(X_{ij} = \omega_k \mid N_{ij})    (1)

This interpretation of a pixel can be used for the assignment, or classification, of a pixel to a class ω_k, which represents a labeling of the pixel from a given set of L possible classes. These classes may correspond to tissue types believed to be contained in the image (muscle tissue, bone, etc.).
A neighborhood N_ij consists of those pixels that fall into a defined region surrounding the pixel at location (i, j). Various shaped neighborhoods can be used, and their shapes (sizes) are usually determined from the underlying characteristics present in the regions, or structures, of an image. A very commonly employed and simple neighborhood system may be defined as consisting of those pixels which are within a particular Euclidean distance from the pixel X_ij. Typically the relationship between neighboring points diminishes greatly with increasing distance. It has been shown [9, 10] that a meaningful neighborhood may in fact be quite small, and can be defined as consisting of those neighbors which are a Euclidean distance d of one from the pixel in question. Thus

    N_{ij} = \{ (r, s) : d((r, s), (i, j)) = 1 \}    (2)

defines a four-element neighborhood, consisting of the east, west, north and south adjacent pixels about the site (i, j), and it is this simple but effective structure that we use.
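As a minimal C++ sketch, the four-element neighborhood of Eq. (2) can be enumerated as follows, with clipping at the image border; the names are illustrative, not taken from the paper.

    // Enumerate the four-element neighborhood of Eq. (2): the sites at
    // Euclidean distance 1 from (i, j), clipped at the image border.
    #include <utility>
    #include <vector>

    std::vector<std::pair<int, int>> neighborhood(int i, int j, int W, int H) {
        const int off[4][2] = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};  // W, E, N, S
        std::vector<std::pair<int, int>> n;
        for (const auto& o : off) {
            int r = i + o[0], s = j + o[1];
            if (r >= 0 && r < W && s >= 0 && s < H) n.emplace_back(r, s);
        }
        return n;
    }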
With this concept of a four-pixel neighborhood, the assignment of pixel X_ij to a class ω_k can be evaluated in terms of the posterior probability P(ω_k | X_ij, Θ), the conditional probability that the assignment of class ω_k is correct given the observation X_ij and prior information Θ. The prior information Θ consists of the pixel's neighborhood N_ij, global information such as the class probability of the various classes occurring naturally in the image P(ω_k), and local information defining the likelihood of the contents of the neighborhood. In simple terms, the probability of class ω_k being assigned to site (i, j) depends only upon the observations contained in its neighborhood N_ij, Eq. (3).

    P(\omega_k \mid X_{ij}, \Theta) = \prod_{(x,y) \in N_{ij}} P(\omega_k \mid X_{xy}, \Theta)    (3)

In Eq. (3), X_xy represents an observation contained in the neighborhood N_ij. From a Bayesian perspective, the best classification for X_ij is the class ω_k that maximizes Eq. (4).

    \prod_{(x,y) \in N_{ij}} P(\omega_k) P(X_{xy} \mid \omega_k) \ge \prod_{(x,y) \in N_{ij}} P(\omega_h) P(X_{xy} \mid \omega_h), \quad \forall h \ne k    (4)
The decision rule defined above involves maximizing the probability of the classification of an entire neighborhood. The MRF-Gibbs equivalence shows that Eq. (4) can be written in a Gibbsian form P(X_ij, ω_k),

    P(X_{ij}, \omega_k) = \frac{1}{Z} e^{-U(X_{ij}, \omega_k)/T}    (5)

    U(X_{ij}, \omega_k) = -\sum_{(x,y) \in N_{ij}} \log P(X_{xy} \mid \omega_k) + U(\omega_k)    (6)

and that this Gibbsian equivalent can be evaluated using the energy function U(X_ij, ω_k) used here, a normalizing constant Z, and an artificial temperature T. Proof of the equivalence between the Gibbs and MRF representations is given in Refs. [9, 10]. The MRF-Gibbs classification process therefore consists of assigning the pixels of an image those class values that produce a maximum posterior distribution when the energy is minimized. The description above has concentrated on the local expression of the MRF-Gibbs model for classification. Evidently, energy minimization cannot proceed on a purely local basis; instead, we consider the maximum a posteriori (MAP) estimate of the image's class structure as follows. Suppose there are L classes in an n × n image. A configuration z ∈ C is an enumeration of the classes attached to each point of the image. In the case described, the cardinality card{C} = L^{n×n}. Let W be the (random) observation of a configuration, and let X be the (random) observation of the image data; then the MAP estimate is the estimate [8, 9]

    W_{MAP} = \arg\max_{z \in C} \{ P(W = z \mid X = x) \}    (7)

However, the assignment of the pixels cannot be performed pixel by pixel, but instead represents a dynamic programming problem.
One can employ simulated annealing to migrate towards an approximate solution to this maximization problem. Simulated annealing is an iterative optimization technique which attempts to minimize an energy function through random excitation [11]. It can be utilized to reduce the overall computation of locating an "ideal" classification to a more reasonable amount. Here "ideal" now refers to an approximate solution to the classification problem, which may or may not be the true classification solution. Typically, the procedure terminates after a fixed number of iterations of the annealing process.

In the MRF-Gibbs classification approach, simulated annealing functions by periodically re-classifying a pixel to a "worse" classification, i.e. a classification that actually produces a higher energy potential. It is this occasional acceptance of a "worse" classification that allows the algorithm to jump out of a local minimum in search of a global minimum, consequently producing an improved classified image. The decision rule for the acceptance of a "worse" classification is governed by a temperature and a cooling schedule.

Numerous cooling schedules have been developed for simulated annealing [11]; however, it appears that the best schedule is usually determined through ad hoc trial and error. In our case, the cooling schedule of Eq. (8)

    T(t) = \frac{T_0}{\log(t + 1)}    (8)

where T_0 is the initial temperature, is employed [12]. The initial temperature T_0 was determined from the first iteration of the algorithm, which evaluated the image by summing the total energy encountered (increase in energy) during an attempted classification of the pixels to a new classification with higher energy. The gradual cooling of the temperature produced by the schedule ideally allows the algorithm to settle in the global minimum.
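The schedule of Eq. (8) is a one-line computation. A minimal C++ sketch, assuming T_0 has been accumulated during the first pass as described above:

    // Logarithmic cooling schedule of Eq. (8): T(t) = T0 / log(t + 1).
    #include <cmath>

    double temperature(double T0, int t) {
        // t >= 1, so log(t + 1) > 0.
        return T0 / std::log(static_cast<double>(t) + 1.0);
    }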
With the employment of simulated annealing to find the minimal energy produced by Eq. (6), the remaining complexity of the MRF-Gibbs approach involves determining a useful energy function. The derivation of an energy function typically involves understanding the nature of the image, identifying a meaningful neighborhood shape (and size), and describing the unique attributes of the image regions, or structures. Numerous models have been developed to describe these differing attributes in terms of relationships of the pixels of the neighborhoods contained in the regions or structures. As a result, many different energy functions have been developed which attempt to evaluate the likelihood or quality of a neighborhood relative to a set of attribute qualifiers [10]. For our energy function,

    U(X_{ij}, \omega_k) = -\sum_{(x,y) \in N_{ij}} \log P(X_{xy} \mid \omega_k)
                          + \sum_{(x,y) \in N_{ij}} \log P(\omega_{(x,y)})
                          + \sum_{(x,y) \in N_{ij}} \log P_{(x,y),(x+1,y)}(\omega_{(x,y)}, \omega_{(x+1,y)})
                          + \sum_{(x,y) \in N_{ij}} \log P_{(x,y),(x,y+1)}(\omega_{(x,y)}, \omega_{(x,y+1)})    (9)

we utilized the conditional probability of each neighbor in N_ij given the class ω_k, the probability of the classifications of the neighbors, and the transitional probabilities associated with the spatial configuration of the neighborhood [13]. This spatial likelihood encompasses both the horizontal and vertical transitions of the neighborhood. Since our neighborhood consisted of the four-pixel stencil mentioned earlier, the horizontal relationship consisted of evaluating the transitional probability of the west pixel to pixel X_ij and of X_ij to the east pixel. Likewise the vertical relationship consisted of the vertical transitional probability of the north pixel to X_ij and of X_ij to the south pixel. It should be noted that the transitional probability of ω_k to ω_j is the likelihood, or probability, that ω_k is followed by ω_j in the direction of the transition type.
Simulated annealing is employed in conjunction with Eq. (9), as defined by the following series of steps.
1. Randomly select a new class for the pixel X_ij.
2. Evaluate the new classification in terms of the neighborhood relationships and the likelihood of the neighborhood via the energy function (Eq. (9)), obtaining the energy change ΔU.
3. If the re-classification produces a lower energy than the current classification, then accept it.
4. Else accept it conditionally, based upon an annealing-schedule derived probability A_i > r.
The probability A_i,

    A_i = e^{-\Delta U / T(t)}    (10)

is determined based upon the magnitude of the increase in energy ΔU from the current pixel classification to the new one, and the annealing schedule T(t). The acceptance guide r is defined as a uniform random variable between 0 and 1.
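A minimal C++ sketch of the acceptance rule in steps 3-4 and Eq. (10); the names are illustrative, not taken from the paper.

    // Acceptance rule: a lower-energy move is always accepted; an uphill
    // move of size dU is accepted with probability exp(-dU / T(t)),
    // compared against a uniform random draw r in [0, 1) (Eq. (10)).
    #include <cmath>
    #include <random>

    bool accept(double dU, double T, std::mt19937& rng) {
        if (dU <= 0.0) return true;                    // energy decreased
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        double r = uni(rng);
        return std::exp(-dU / T) > r;                  // conditional accept
    }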
The series of steps defined in the re-classification rule was applied to each pixel of the image in succession. Upon completion, the temperature controlling the annealing schedule was updated and the next iteration of the algorithm was begun. The number of iterations required to produce the "ideal" classification is not predictable. A classification resulting from a context-independent classification approach (K nearest neighbors) was used to estimate the transitional probabilities, the probabilities of the classes and the class statistics (means and covariances), and to provide an initial classification of the image.
3. Implementation
3.1. Parallel design description
The parallelization of the MRF-Gibbs algorithm was approached through domain decomposition, in which both data and computation were partitioned. This led to the principal employment of a single program multiple data (SPMD) design, in which n child processes (n = 1 to 7) cooperatively work on different portions of an image (W wide by H high) under the direction of a single parent process. The principal responsibilities of the parent process are the partitioning and distribution of the image data amongst the children, performing collective calculations, and communicating important global classification information. Child processes are tasked with performing the required image classification operations on their image portion. Information used by the child processes, obtained from the parent process, consists primarily of image data and class related information, while information obtained from neighboring child processes consists of the conditional probabilities of the image partition edges and the classified values associated with the same edges. This edge information is required in order to generate the maximum number of complete neighborhoods utilized in the MRF-Gibbs classification process. Evidently, the outcomes of the serial and parallel algorithms are identical.
Each child process is given an approximately equal size portion of the image to work on. This image section consists of a rectangular segment of the image, W/n in width (the last process receives extra columns when required by the magnitude of n) and H in height. Therefore the partitioning of the data is basically performed by mapping the columns of the image to particular child processes. Image data (both raw image data and initial classified data) is sent by the parent to the children in row by row fashion. Hence, each image message consists of W/n image pixel values, with a total of 2nH messages sent from the parent to the children. In addition, a total of nH messages is sent from the children to the parent at the end of a classification, to return the classified portions of the image to the parent for re-unification.
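A minimal C++ sketch of this column partitioning, assigning each child a [first, last) column range with the remainder going to the last child; the names are illustrative.

    // Map image columns to n child processes: each strip is W/n columns
    // wide, with the last child absorbing any remainder, as described above.
    #include <utility>
    #include <vector>

    std::vector<std::pair<int, int>> partition(int W, int n) {
        std::vector<std::pair<int, int>> strips;
        int base = W / n;
        for (int c = 0; c < n; ++c) {
            int first = c * base;
            int last = (c == n - 1) ? W : first + base;  // remainder to last
            strips.emplace_back(first, last);            // [first, last)
        }
        return strips;
    }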
Each process, child and parent, was mapped to a separate processor of the cluster. This was chosen in an attempt both to maximize processor utilization and to load balance the overall system. In addition, the principal communication required between child processes consists of the exchange of classified edge data at the beginning of each iteration. During a given MRF-Gibbs iteration, each child can operate simply on its image portion without interference from neighboring processes. Both the parent and child processes exhibit synchronous communications, in that parent and children will wait for expected information to arrive.
The MRF-Gibbs parallel application was designed and implemented with respect to scalability, future software modification, and software reuse. Both the child and parent processes were created to function without concern for either the number of processors or processes employed. A high-level communications interface (a C++ class) was designed and implemented to abstract the actual algorithm above the communications mechanism utilized, therefore supporting the rapid employment of other forms of underlying inter-process communication. This class also provided overloaded operations, which abstracted the development of the algorithm above the complexities of communicating various data types, thus further simplifying code development.
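The paper does not reproduce this interface, so the following is only a hypothetical C++ sketch of what such a communications abstraction might look like: the classifier codes against the abstract class, and concrete subclasses bind it to MPI, to a virtual shared memory package, or to any other transport. All names are illustrative.

    // Hypothetical communications abstraction in the spirit described
    // above: raw send/recv are virtual, while typed overloads hide the
    // packing details from the algorithm code.
    #include <cstddef>
    #include <vector>

    class Channel {
    public:
        virtual ~Channel() = default;
        virtual void send(int dest, const void* buf, std::size_t bytes) = 0;
        virtual void recv(int src, void* buf, std::size_t bytes) = 0;

        // Overloads hide per-type packing from the algorithm code.
        void send(int dest, const std::vector<double>& v) {
            send(dest, v.data(), v.size() * sizeof(double));
        }
        void recv(int src, std::vector<double>& v) {
            recv(src, v.data(), v.size() * sizeof(double));
        }
    };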
3.2. Image dataset
The MRF-Gibbs algorithm was customized to the images used and to the parallel environment employed in order to improve its performance. Sample images from the Visible Human Project female dataset were used for all classifications. The raw images have dimensions of 2048 × 1216 × 3, but are reduced by background elimination to 1300 × 700 × 3.
3.3. Algorithm realization
The implementation of the MRF-Gibbs algorithm was approached through a series of phases. The phases were data loading, transitional probability determination, class conditional probability construction, and MRF-Gibbs re-classification iteration.

The first phase, data loading, consists of loading in the raw image data, the context-independent classified version of the image, and the statistics of the classes determined from the context-independent approach. The context-independent classified image contains a single classified value (0 to L − 1) for each pixel. The class statistics consist of a class mean μ_k, covariance matrix Σ_k and probability P_k for each class.
The next phase consists of determining the transitional probabilities of the classes defined in the context-independent classification of the image.

The class conditional probability construction phase followed the transitional probability phase. This phase consists of determining the class conditional probability of each class for each pixel of the image. The class conditional probability was modeled as a maximum likelihood estimation posterior probability, which is the posterior probability of a class ω_k given a pixel value X. Eq. (11) defines the class conditional probability P(ω_k | X_ij) of class ω_k, of L total classes, based upon the probability of X_ij given class ω_k, which is identified as the conditional probability P(X_ij | ω_k), and the overall probability of class ω_k, P(ω_k).

    P(\omega_k \mid X_{ij}) = \frac{P(\omega_k) P(X_{ij} \mid \omega_k)}{\sum_{z=1}^{L} P(\omega_z) P(X_{ij} \mid \omega_z)}    (11)

In Eq. (11), the conditional probability P(X_ij | ω_k) is modeled as a normal distribution using the class mean and covariance matrix defined in the class statistics. Overall this phase consists of calculating and storing W × H × L conditional probabilities, where W, H and L refer to the image width, image height and number of classes of the non-contextually classified image, respectively.
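A minimal C++ sketch of evaluating the normal-distribution class conditional P(X_ij | ω_k) for an RGB pixel, assuming the inverse covariance and determinant have been precomputed from the class statistics; the names are illustrative.

    // Trivariate normal density over RGB: the class conditional used in
    // Eq. (11), with the inverse covariance and covariance determinant
    // precomputed once per class.
    #include <cmath>

    struct ClassStats {
        double mean[3];     // class mean (RGB)
        double inv[3][3];   // inverse of the class covariance matrix
        double det;         // determinant of the covariance matrix
    };

    double condProb(const double x[3], const ClassStats& s) {
        double d[3], q = 0.0;
        for (int i = 0; i < 3; ++i) d[i] = x[i] - s.mean[i];
        for (int i = 0; i < 3; ++i)          // quadratic form d' * inv * d
            for (int j = 0; j < 3; ++j)
                q += d[i] * s.inv[i][j] * d[j];
        const double pi = 3.14159265358979323846;
        double norm = std::pow(2.0 * pi, 1.5) * std::sqrt(s.det);
        return std::exp(-0.5 * q) / norm;
    }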
The final phase, MRF-Gibbs iteration, involves the actual re-classifying of the pixels through a series of re-classification iterations. During the first iteration the initial temperature T_0 is set based upon the total energy encountered during the attempted re-classification of all pixels. During this initial iteration no pixels are re-classified to a "worse" classification; instead, the increase in energy encountered during an attempted re-classification of a pixel is accumulated and used to set T_0. During the second and subsequent iterations this phase attempts to re-classify each pixel to a random class value. The energy of each newly classified pixel value is evaluated and accepted based upon the guidelines defined in the re-classification rules for MRF-Gibbs. This phase iterates through the MRF-Gibbs re-classification algorithm a fixed number of times. At the end of each iteration, the temperature is adjusted in accordance with the annealing schedule defined earlier.

Overall the implementation of the MRF-Gibbs algorithm can be examined as a series of steps, listed below (a compact driver sketch follows the list).
1. Load in the raw image, the class statistics, and a previously classified image.
2. Determine the transitional probabilities from the previously classified image.
3. Determine the class conditional probability for each class at each pixel.
4. Evaluate the raw image and set the initial temperature.
5. For each pixel in the image choose a new class value at random, and evaluate the change in energy resulting from the energy function (using class probabilities, conditional probabilities, and transitional probabilities). Accept the new pixel classification if the resulting energy is lower; else accept it conditionally based upon an annealing probability.
6. Once re-classification has been attempted for all pixels, lower the temperature based upon the annealing schedule.
7. Go to step 5 until the desired number of iterations has been reached.
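A compact C++ driver sketch of steps 5-7, reusing the temperature and acceptance helpers sketched earlier; deltaU() stands in for the Eq. (9) energy change and is declared but not defined here. All names are illustrative.

    #include <random>
    #include <vector>

    double deltaU(int i, int j, int oldClass, int newClass);  // Eq. (9) energy change
    bool accept(double dU, double T, std::mt19937& rng);      // Eq. (10), sketched earlier
    double temperature(double T0, int t);                     // Eq. (8), sketched earlier

    void classify(std::vector<int>& label, int W, int H, int L,
                  double T0, int iterations, std::mt19937& rng) {
        std::uniform_int_distribution<int> pick(0, L - 1);
        for (int t = 1; t <= iterations; ++t) {               // step 7: fixed count
            double T = temperature(T0, t);
            for (int j = 0; j < H; ++j)
                for (int i = 0; i < W; ++i) {
                    int cand = pick(rng);                     // step 5: random class
                    double dU = deltaU(i, j, label[j * W + i], cand);
                    if (accept(dU, T, rng)) label[j * W + i] = cand;
                }
            // step 6: the temperature falls on the next pass via temperature()
        }
    }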
In a typical image classification the majority of the execution time of the program is spent in either the conditional probability construction phase (phase 3) or in the re-classification phase (phase 5). The other phases do not constitute a significant amount of computation. Fig. 1 represents a graphical view of the parallel version of the MRF-Gibbs algorithm.

Steps 1, 8, and 10 of the MRF-Gibbs parallel implementation are considered to be principally communication steps. Steps 1 and 10 occur only once during an image classification, while the classified edge exchange step (step 8) occurs during each iteration of the MRF-Gibbs algorithm.

Fig. 1. Depiction of the parallel version of the MRF-Gibbs algorithm.

Hence, step 8 is perhaps the true communications cost of the parallelized version of the algorithm, since it is the only communication whose volume grows with the number of iterations. The other steps which require communication (other than 1, 8, and 10) represent relatively insignificant communication amounts and also occur only once per image classification. In Figs. 2 and 3, I_xy represents the image partition and C_xy represents the classified partition of the same image segment.

The computationally most intense step is the determination of the class conditional probabilities (step 5), though this step is only required to be performed once per image classification. In step 5 the class conditionals are computed once and stored for re-use during the MRF-Gibbs classification phase (steps 8–9).

Fig. 2. Original image after background removal.

Fig. 3. Classified image produced by the MRF-Gibbs algorithm.
Once the program enters the MRF-Gibbs classification phase (steps 8–9), most of the time is spent in the evaluation of random class values at each pixel in the image (step 9). This step requires a fairly large amount of computation, due primarily to the evaluation of the energy function using both the current pixel classification value and the randomly selected value. Note that the current energy at pixel X_ij would not be accurate if the classification of pixel X_{i,j+1} changed during the attempted re-classification of pixel X_{i,j+1}; hence the energy potentials of the neighborhoods change continuously as neighbors are subsequently re-classified.

At the end of the required number of iterations each child process returns the classified partitions of the image to the parent (step 10), which combines the image sections into a final classified version of the original image. This signals the completion of the classification process.
4. Performance results
Table 1 shows the performance of the algorithm for a varying number of child processes in conjunction with a single parent process. The number of iterations was fixed at 200, with 6 classes, for all trials. Both MPI and Paradise, as indicated in the table, were utilized for all inter-process communications. The information exchanged consisted of the class statistics, the raw image data and classified partitions of the images, along with the class conditional and classified image edge values. The table lists speedup relative to a single sequential process; the timeline employed started with the distribution of the image data and ended with the return and combination of all classified image sections by the parent.

Table 1. Speedup^a of MRF-Gibbs resulting from using MPI and Paradise

    Number of child processes    Speedup with MPI    Speedup with Paradise
    1                            0.14                0.21
    2                            1.32                1.86
    3                            2.39                2.79
    4                            2.37                3.67
    5                            2.43                4.47
    6                            2.65                5.15
    7                            2.56                6.02

    ^a Speedup generated from estimated single-process performance.

5. Experimental framework

All of the parallelization experiments were performed on the University of Connecticut CoW (Cluster of Workstations) facility located inside the Image Processing Laboratory (IPL), Booth Research Center. The machines contained in the cluster were identical and consisted of 32 MB, 166 MHz Intel Pentium based personal computers executing the Windows NT 3.5.1 operating system, connected together via an IBM 25.5 Mb/s ATM switch. The ATM switch was configured to provide LAN Emulation support. All software was written in C++ using the Visual C++ software development environment. Version 0.9 of MPI was used.
6. Conclusion
Automatic image classification is an important and computationally challenging task. We have taken the MRF-Gibbs algorithm and have parallelized it using both message passing and virtual shared memory based communications. We demonstrated that the algorithm can be parallelized and implemented on a common cluster using standard inter-process communications. Our results show the following.
1. Classification algorithms can be effectively parallelized by partitioning the image between cooperating child processes and then reconciling the local attributes of the image partitions. This coarse-grain partitioning approach could perhaps be employed for other classification algorithms, either context-dependent or context-independent.
2. Both MPI and Paradise generate an overhead but can still be effectively used if the algorithm's computational burden is large enough to compensate for the additional communications cost. For relatively computationally simple algorithms, such as nearest mean, communications costs may greatly limit the speedup experienced.
3. A small cluster can be configured and utilized for the classification process. However, the real-time acquisition and display requirements of future medical image visualization systems are currently far greater than can realistically be supported by current small-scale systems.
4. In general, Paradise far outperformed MPI for the MRF-Gibbs application developed under the hardware and software suite utilized.
In summary, the parallelization of the MRF-Gibbs algorithm demonstrated that statistically based classification algorithms can be parallelized using a coarse-grain approach based upon partitioning the image data. Partitioning the data does require that statistics associated with the image partitions be reconciled. However, the amount of data required to be communicated is typically quite small, and the communication can be performed using standard mechanisms. It should be noted that both MPI and Paradise do generate a processing overhead. The impact of these overheads could potentially be reduced by increasing the processing capacity of the cluster, or perhaps by employing a lower-level communication layer than TCP/IP for MPI or Paradise. In addition, both MPI and Paradise provide an effective application programming interface for the classification problem domain.
Acknowledgements
This work was funded by the State of Connecticut under its Critical Technologies Program. We also wish to gratefully acknowledge Scientific Computing Associates, New Haven, CT, for providing the Paradise software for the duration of the project, and Dr Michael J. Ackerman, Visible Human Project, National Library of Medicine, for providing the Visible Human Female dataset.
References
[1] SPECfp_rate95, SPEC Benchmarks, http://open.specbench.org/osg/cpu95/results/rfp95.html.
[2] Snir, M., Otto, S., Huss-Lederman, S., Walker, D. and Dongarra, J., MPI: The Complete Reference. MIT Press, Cambridge, MA, 1996.
[3] Gropp, W., Lusk, E. and Skjellum, A., Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, Cambridge, MA, 1996.
[4] Amza, C. and Cox, A., TreadMarks: shared memory computing on networks of workstations, IEEE Computer, February 1996, 18–28.
[5] Visible Human Dataset, National Library of Medicine, National Institutes of Health, Bethesda, MD.
[6] Greenshields, I. R. and Chun, J., Simulation of fluid flow through biological inhomogeneous elastic tubes, Proc. of IASTED Int. Conference on Modeling and Simulation, Pittsburgh, PA, 1993, pp. 66–68.
[7] Greenshields, I. R., Peters, T. J. and Chun, J., Parallel strategies in the reconstruction of surfaces from contour data, Proc. of 4th ISMM Conf. on Parallel and Distributed Computing Systems, 1991, pp. 355–358.
[8] Fukunaga, K., Statistical Pattern Recognition, 2nd edition. Academic Press, San Diego, CA, 1990.
[9] Li, S. Z., Markov Random Field Modeling in Computer Vision. Springer, Berlin, 1995.
[10] Winkler, G., Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Springer, Berlin, 1991.
[11] Ingber, L., Simulated annealing: practice versus theory, Mathematical and Computer Modeling 18 (11) (1993) 29–57.
[12] Geman, S. and Geman, D., Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. on Pattern Analysis and Machine Intelligence, 1984, PAMI-6(6), 721–741.
[13] Zhu, J. and Greenshields, I., Classification of multiecho magnetic resonance images of brain using MRF-Gibbs classifier, Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, 1993.
Thomas A. Daggett is a Ph.D. candidate at the University of Connecticut. He received a BSE in computer engineering from the University of Connecticut in 1985 and his MS in computer science from Rensselaer Polytechnic Institute in 1989. He is currently serving as an operability and display engineer in the Advanced Display Research Facility (ADRF) at the Naval Undersea Warfare Center (NUWC) Division Newport, Newport, Rhode Island. His research interests lie in parallel/distributed processing, image processing, small-scale cluster computing, and advanced display technologies.

Ian Greenshields is an Associate Professor of Computer Science and Engineering at the University of Connecticut. His research interests lie in the areas of biomedical computing, biomedical image processing and biodynamical modeling.