DOMAINS WE ASSIST

HARDWARE:

Embedded, Robotics, Quadcopter (Flying Robot), Biomedical, Biometric, Automotive, VLSI, Wireless (GSM, GPS, GPRS, RFID, Bluetooth, ZigBee), Embedded Android.

SOFTWARE:

Cloud Computing, Mobile Computing, Wireless Sensor Networks, Network Security, Networking, Wireless Networks, Data Mining, Web Mining, Data Engineering, Cyber Crime, Android application development.

SIMULATION:

Image Processing, Power Electronics, Power Systems, Communication, Biomedical, Geoscience & Remote Sensing, Digital Signal Processing, VANETs, Wireless Sensor Networks, Mobile Ad-hoc Networks.

TECHNOLOGIES WE WORK WITH:

Embedded (8051, PIC, ARM7, ARM9, Embedded C), VLSI (Verilog, VHDL, Xilinx), Embedded Android

JAVA / J2EE, XML, PHP, SOA, Dotnet, Java Android.

Matlab and NS2

TRAINING METHODOLOGY

1. Train you on the technology as per the project requirements.

2. IEEE paper explanation, Flow of the project, System Design.

3. Algorithm implementation & Explanation.

4. Project Execution & Demo.

5. Provide Documentation & Presentation of the project.


Image Processing

1. Perceptual Quality Metric With Internal Generative Mechanism (2013)
Objective image quality assessment (IQA) aims to evaluate image quality consistently with human perception. Most of the existing perceptual IQA metrics cannot accurately represent the degradations from different types of distortion; e.g., existing structural similarity metrics perform well on content-dependent distortions but not as well as peak signal-to-noise ratio (PSNR) on content-independent distortions. In this paper, we integrate the merits of the existing IQA metrics with the guide of the recently revealed internal generative mechanism (IGM). The IGM indicates that the human visual system actively predicts sensory information and tries to avoid residual uncertainty for image perception and understanding. Inspired by the IGM theory, we adopt an autoregressive prediction algorithm to decompose an input scene into two portions: the predicted portion with the predicted visual content, and the disorderly portion with the residual content. Distortions on the predicted portion degrade the primary visual information, and structural similarity procedures are employed to measure its degradation; distortions on the disorderly portion mainly change the uncertain information, and PSNR is employed for it. Finally, according to the noise energy deployment on the two portions, we combine the two evaluation results to acquire the overall quality score. Experimental results on six publicly available databases demonstrate that the proposed metric is comparable with the state-of-the-art quality metrics.

2. Local Edge-Preserving Multiscale Decomposition for High Dynamic Range Image Tone Mapping (2013)
Local energy pattern, a statistical histogram-based representation, is proposed for texture classification. First, we use normalized local-oriented energies to generate local feature vectors, which describe the local structures distinctively and are less sensitive to imaging conditions. Then, each local feature vector is quantized by self-adaptive quantization thresholds determined in the learning stage using histogram specification, and the quantized local feature vector is transformed to a number by N-nary coding, which helps to preserve more structure information during vector quantization. Finally, the frequency histogram is used as the representation feature. The performance is benchmarked by material categorization on the KTH-TIPS and KTH-TIPS2-a databases. Our method is compared with typical statistical approaches, such as basic image features, local binary pattern (LBP), local ternary pattern, completed LBP, Weber local descriptor, and VZ algorithms (VZ-MR8 and VZ-Joint). The results show that our method is superior to the other methods on the KTH-TIPS2-a database, and achieves competitive performance on the KTH-TIPS database. Furthermore, we extend the representation from static images to dynamic textures, and achieve favorable recognition results on the University of California at Los Angeles (UCLA) dynamic texture database.

3. Fast Positive Deconvolution of Hyperspectral Images
In this brief, we provide an efficient scheme for performing deconvolution of large hyperspectral images under a positivity constraint, while accounting for spatial and spectral smoothness of the data.

4. Fuzzy C-Means Clustering With Local Information and Kernel Metric for Image Segmentation (2013)
In this paper, we present an improved fuzzy C-means (FCM) algorithm for image segmentation by introducing a tradeoff weighted fuzzy factor and a kernel metric. The tradeoff weighted fuzzy factor depends simultaneously on the spatial distance of all neighboring pixels and their gray-level difference. By using this factor, the new algorithm can accurately estimate the damping extent of neighboring pixels. In order to further enhance its robustness to noise and outliers, we introduce a kernel distance measure into its objective function. The new algorithm adaptively determines the kernel parameter by using a fast bandwidth selection rule based on the distance variance of all data points in the collection. Furthermore, the tradeoff weighted fuzzy factor and the kernel distance measure are both parameter free. Experimental results on synthetic and real images show that the new algorithm is effective and efficient, and is relatively independent of the type of noise.

#56, II Floor, Pushpagiri Complex, 17th Cross, 8th Main, Opp. Water Tank, Vijaynagar, Bangalore-560040.
Website: www.citlprojects.com, Email ID: projects@citlindia.com, hr@citlindia.com
MOB: 9886173099 / 9986709224, PH: 080-23208045 / 23207367
MATLAB – 2013 (Image Processing, Wireless Sensor Network, Power Electronics, Signal Processing, Power System, Communication, Wireless Communication, Geoscience & Remote Sensing)
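Several of the metrics in this list (project 1, for instance) fall back on PSNR for content-independent distortions. As a minimal illustration of that building block (not code from any of the projects above), PSNR over two equal-size grayscale images, flattened to lists, can be computed as:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size images (flat lists)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]
noisy = [102, 118, 133, 139]
score = psnr(ref, noisy)  # roughly 41.6 dB for this small example
```

Higher values indicate less distortion; identical images yield infinity by convention.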
5. Image-Difference Prediction: From Grayscale to Color (2013)
Existing image-difference measures show excellent accuracy in predicting distortions such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures. Index Terms—Color, image difference, image quality.

6. Modified Gradient Search for Level Set Based Image Segmentation (2013)
Level set methods are a popular way to solve the image segmentation problem. The solution contour is found by solving an optimization problem in which a cost functional is minimized. Gradient descent methods are often used to solve this optimization problem, since they are very easy to implement and applicable to general nonconvex functionals. They are, however, sensitive to local minima and often display slow convergence. Traditionally, cost functionals have been modified to avoid these problems. In this paper, we instead propose using two modified gradient descent methods: one using a momentum term and one based on resilient propagation. These methods are commonly used in the machine learning community. In a series of 2-D/3-D experiments using real and synthetic data with ground truth, the modifications are shown to reduce the sensitivity to local optima and to increase the convergence rate. The parameter sensitivity is also investigated. The proposed methods are very simple modifications of the basic method, and are directly compatible with any type of level set implementation. Downloadable reference code with examples is available online.

7. Variational Approach for the Fusion of Exposure Bracketed Pairs (2013)
When taking pictures of a dark scene with artificial lighting, ambient light is not sufficient for most cameras to obtain both accurate color and detail information. The exposure bracketing feature usually available in many camera models enables the user to obtain a series of pictures taken in rapid succession with different exposure times; the implicit idea is that the user picks the best image from this set. But in many cases, none of these images is good enough; in general, good brightness and color information are retained from longer-exposure settings, whereas sharp details are obtained from shorter ones. In this paper, we propose a variational method for automatically combining an exposure-bracketed pair of images within a single picture that reflects the desired properties of each one. We introduce an energy functional consisting of two terms: one measuring the difference in edge information with the short-exposure image, and the other measuring the local color difference with a warped version of the long-exposure image. This method is able to handle camera and subject motion as well as noise, and the results compare favorably with the state of the art.

8. Catching a Rat by Its Edglets (2013)
Computer vision is a noninvasive method for monitoring laboratory animals. In this article, we propose a robust tracking method that is capable of extracting a rodent from a frame under uncontrolled normal laboratory conditions. The method consists of two steps. First, a sliding window combines three features to coarsely track the animal. Then, it uses the edglets of the rodent to adjust the tracked region to the animal's boundary. The method achieves an average tracking error that is smaller than that of a representative state-of-the-art method.

9. Image Denoising With Dominant Sets by a Coalitional Game Approach (2013)
Dominant sets are a new graph partition method for pairwise data clustering proposed by Pavan and Pelillo. We address the problem of dominant sets with a coalitional game model, in which each data point is treated as a player and similar data points are encouraged to group together for cooperation. We propose betrayal and hermit rules to describe the cooperative behaviors among the players. After applying the betrayal and hermit rules, an optimal and stable graph partition emerges, and none of the players in the partition will change their groups. For computational feasibility, we design an approximate algorithm for finding a dominant set of mutually similar players and then apply the algorithm to an application such as image denoising. In image denoising, every pixel is treated as a player who seeks similar partners according to its patch appearance in its local neighborhood. By averaging the noisy effects with the similar pixels in the dominant sets, we improve nonlocal means image denoising to restore the intrinsic structure of the original images, and achieve denoising results competitive with state-of-the-art methods in visual and quantitative quality.

10. Human Detection in Images via Piecewise Linear Support Vector Machines (2013)
Human detection in images is challenged by the view and posture variation problem. In this paper, we propose a piecewise linear support vector machine (PL-SVM) method to tackle this problem. The motivation is to exploit a piecewise discriminative function to construct a nonlinear classification boundary that can discriminate multiview and multiposture human bodies from the backgrounds in a high-dimensional feature space. PL-SVM training is designed as an iterative procedure of feature space division and linear SVM training, aiming at the margin maximization of the local linear SVMs. Each piecewise SVM model is responsible for a subspace, corresponding to a human cluster of a particular view or posture. With the PL-SVM, a cascaded detector is proposed using block orientation features and histogram of oriented gradient features. Extensive experiments show that, compared with several recent SVM methods, our method reaches the state of the art in both detection accuracy and computational efficiency, and it performs best when dealing with low-resolution human regions in cluttered backgrounds.
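The piecewise linear SVM of project 10 scores a sample with the linear model of whichever feature-space region the sample falls into. The toy sketch below (hypothetical cluster centers and weights, not the paper's trained models) illustrates only that select-then-score idea:

```python
def pl_svm_predict(x, centers, models):
    """Toy piecewise linear classifier: pick the linear model whose
    cluster center is nearest to x, then apply that model's (w, b)."""
    k = min(range(len(centers)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centers[i])))
    w, b = models[k]
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Two hypothetical local models; each handles one cluster of the input space.
centers = [(0.0, 0.0), (10.0, 0.0)]
models = [((1.0, 0.0), -1.0),   # cluster 0: positive when x0 > 1
          ((-1.0, 0.0), 9.0)]   # cluster 1: positive when x0 < 9
```

Together the two half-spaces carve out the band 1 < x0 < 9, a region no single linear boundary can describe, which is the point of the piecewise construction.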
11. Nonedge-Specific Adaptive Scheme for Highly Robust Blind Motion Deblurring of Natural Images (2013)
Blind motion deblurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. Although significant progress has been made on tackling this problem, existing methods, when applied to highly diverse natural images, are still far from stable. This paper focuses on the robustness of blind motion deblurring methods toward image diversity, a critical problem that has been neglected for years. We classify the existing methods into two schemes and analyze their robustness using an image set consisting of 1.2 million natural images. The first scheme is edge-specific, as it relies on the detection and prediction of large-scale step edges. This scheme is sensitive to the diversity of the image edges in natural images. The second scheme is nonedge-specific and explores various image statistics, such as the prior distributions. This scheme is sensitive to statistical variation over different images. Based on this analysis, we address the robustness by proposing a novel nonedge-specific adaptive scheme (NEAS), which features a new prior that is adaptive to the variety of textures in natural images. By comparing the performance of NEAS against the existing methods on a very large image set, we demonstrate its advance beyond the state of the art.

12. Missing Texture Reconstruction Method Based on Error Reduction Algorithm Using Fourier Transform Magnitude Estimation Scheme (2013)
A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. In the second step, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.

13. Video Deblurring Algorithm Using Accurate Blur Kernel Estimation and Residual Deconvolution Based on a Blurred-Unblurred Frame Pair (2013)
Blurred frames may occur sparsely in a video sequence acquired by consumer devices such as digital camcorders and digital cameras. In order to avoid visually annoying artifacts due to those blurred frames, this paper presents a novel motion deblurring algorithm in which a blurred frame can be reconstructed utilizing the high-resolution information of adjacent unblurred frames. First, a motion-compensated predictor for the blurred frame is derived from its neighboring unblurred frame via specific motion estimation. Then, an accurate blur kernel, which is difficult to obtain directly from the blurred frame itself, is computed using both the predictor and the blurred frame. Next, residual deconvolution is applied to both of those frames in order to reduce the ringing artifacts inherently caused by conventional deconvolution. The blur kernel estimation and deconvolution processes are iteratively performed for the deblurred frame. Simulation results show that the proposed algorithm provides superior deblurring results over conventional deblurring algorithms while preserving details and reducing ringing artifacts.

14. Comments on "A Robust Fuzzy Local Information C-Means Clustering Algorithm" (2013)
In a recent paper, Krinidis and Chatzis proposed a variation of the fuzzy c-means algorithm for image clustering. The local spatial and gray-level information are incorporated in a fuzzy way through an energy function. Local minimizers of the designed energy function are proposed to obtain the fuzzy membership of each pixel and the cluster centers. In this paper, it is shown that the local minimizers of Krinidis and Chatzis, which obtain the fuzzy membership and the cluster centers in an iterative manner, are not exclusively solutions for true local minimizers of their designed energy function. Thus, the iterations of Krinidis and Chatzis fail to converge to the correct local minima of the designed energy function not because of being trapped in local minima, but because of the design of the energy function.

15. Multiscale Image Fusion Using the Undecimated Wavelet Transform With Spectral Factorization and Nonorthogonal Filter Banks (2013)
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities, which usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework that provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
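Project 15 fuses images in a wavelet domain. A common textbook baseline for such schemes (only a simplified stand-in for the paper's UWT method with spectral factorization) averages the approximation coefficients and keeps the larger-magnitude detail coefficient. A one-level Haar sketch on 1-D signals:

```python
def haar_1d(s):
    """One-level Haar analysis: approximation and detail coefficients."""
    approx = [(s[2 * i] + s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    detail = [(s[2 * i] - s[2 * i + 1]) / 2 for i in range(len(s) // 2)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of the one-level Haar analysis above."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(s1, s2):
    a1, d1 = haar_1d(s1)
    a2, d2 = haar_1d(s2)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]                    # average approximations
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]   # max-abs detail rule
    return ihaar_1d(a, d)
```

The max-abs rule keeps whichever source carries the stronger local feature, which is why fusion results depend on how far coefficient energy spreads around singularities, the issue the paper's two-stage filtering addresses.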
16. Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution (2013)
This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via gamma correction and the probability distribution of luminance pixels. To enhance video, the proposed image-enhancement method uses temporal information regarding the differences between each frame to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods. Index Terms—Contrast enhancement, gamma correction, histogram equalization, histogram modification.

17. Wavelet Bayesian Network Image Denoising (2013)
From the perspective of the Bayesian approach, the denoising problem is essentially a prior probability modeling and estimation task. In this paper, we propose an approach that exploits a hidden Bayesian network, constructed from wavelet coefficients, to model the prior probability of the original image. Then, we use the belief propagation (BP) algorithm, which estimates a coefficient based on all the coefficients of an image, as the maximum a posteriori (MAP) estimator to derive the denoised wavelet coefficients. We show that if the network is a spanning tree, the standard BP algorithm can perform MAP estimation efficiently. Our experimental results demonstrate that, in terms of peak signal-to-noise ratio and perceptual quality, the proposed approach outperforms state-of-the-art algorithms on several images, particularly in the textured regions, with various amounts of white Gaussian noise.

18. Nonlinearity Detection in Hyperspectral Images Using a Polynomial Post-Nonlinear Mixing Model (2013)
This paper studies a nonlinear mixing model for hyperspectral image unmixing and nonlinearity detection. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by additive white Gaussian noise. These nonlinear functions are approximated by polynomials, leading to a polynomial post-nonlinear mixing model. We have shown in a previous paper that the parameters involved in the resulting model can be estimated using least squares methods. A generalized likelihood ratio test based on the estimator of the nonlinearity parameter is proposed to decide whether a pixel of the image results from the commonly used linear mixing model or from a more general nonlinear mixing model. To compute the test statistic associated with the nonlinearity detection, we propose to approximate the variance of the estimated nonlinearity parameter by its constrained Cramér–Rao bound. The performance of the detection strategy is evaluated via simulations conducted on synthetic and real data. More precisely, synthetic data have been generated according to the standard linear mixing model and three nonlinear models from the literature. The real data investigated in this study are extracted from the Cuprite image, which shows that some minerals seem to be nonlinearly mixed in this image. Finally, it is interesting to note that the estimated abundance maps obtained with the post-nonlinear mixing model are in good agreement with results obtained in previous studies.

19. Image Quality Assessment Using Multi-Method Fusion (2013)
A new methodology for objective image quality assessment (IQA) with multi-method fusion (MMF) is presented in this paper. The research is motivated by the observation that there is no single method that can give the best performance in all situations. To achieve MMF, we adopt a regression approach. The new MMF score is set to be a nonlinear combination of scores from multiple methods, with suitable weights obtained by a training process. In order to improve the regression results further, we divide distorted images into three to five groups based on the distortion types and perform regression within each group, which is called "context-dependent MMF" (CD-MMF). One task in CD-MMF is to determine the context automatically, which is achieved by a machine learning approach. To further reduce the complexity of MMF, we use algorithms to select a small subset from the candidate method set. The result is very good even if only three quality assessment methods are included in the fusion process. The proposed MMF method using support vector regression is shown to outperform a large number of existing IQA methods by a significant margin when tested on six representative databases.

20. Unified Blind Method for Multi-Image Super-Resolution and Single/Multi-Image Blur Deconvolution (2013)
This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.
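Project 16 derives a per-intensity gamma from a weighted cumulative distribution of luminance, so dark, frequent levels are brightened most. The sketch below is a rough reading of that idea (the weighting form and the alpha parameter are assumptions, not the paper's exact formulation):

```python
def agcwd(pixels, alpha=0.5, max_val=255):
    """Rough sketch of adaptive gamma correction with a weighting distribution."""
    n = len(pixels)
    hist = [0] * (max_val + 1)
    for p in pixels:
        hist[p] += 1
    pdf = [h / n for h in hist]
    pmax, pmin = max(pdf), min(pdf)
    # Weighting distribution: compress the pdf's dynamic range.
    wpdf = [pmax * ((p - pmin) / (pmax - pmin)) ** alpha for p in pdf]
    total = sum(wpdf)
    cdf, acc = [], 0.0
    for w in wpdf:
        acc += w / total
        cdf.append(acc)
    # Per-intensity gamma 1 - cdf(l): frequent/dark levels get a smaller
    # exponent and are therefore brightened more.
    lut = [round(max_val * (l / max_val) ** (1.0 - cdf[l]))
           for l in range(max_val + 1)]
    return [lut[p] for p in pixels]

out = agcwd([10, 20, 30, 40])  # a dark test image is lifted toward mid-tones
```

Because the transform is a per-intensity lookup table, it costs one histogram pass plus one LUT application per frame, which is why the paper can reuse temporal information to cut video-processing cost.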
21. In-Plane Rotation and Scale Invariant Clustering Using Dictionaries (2013)
In this paper, we present an approach that simultaneously clusters images and learns dictionaries from the clusters. The method learns dictionaries and clusters images in the Radon transform domain. The main feature of the proposed approach is that it provides both in-plane rotation and scale invariant clustering, which is useful in numerous applications, including content-based image retrieval (CBIR). We demonstrate the effectiveness of our rotation and scale invariant clustering method in a series of CBIR experiments. Experiments are performed on the Smithsonian isolated leaf, Kimia shape, and Brodatz texture datasets. Our method provides both good retrieval performance and greater robustness compared to a standard Gabor-based method and three state-of-the-art shape-based methods that have similar objectives.

22. Analysis Operator Learning and Its Application to Image Reconstruction (2013)
Practical image-acquisition systems are often modeled as a continuous-domain prefilter followed by an ideal sampler, where generalized samples are obtained after convolution with the impulse response of the device. In this paper, our goal is to interpolate images from a given subset of such samples. We express our solution in the continuous domain, considering consistent resampling as a data-fidelity constraint. To make the problem well posed and ensure edge-preserving solutions, we develop an efficient anisotropic regularization approach that is based on an improved version of the edge-enhancing anisotropic diffusion equation. Following variational principles, our reconstruction algorithm minimizes successive quadratic cost functionals. To ensure fast convergence, we solve the corresponding sequence of linear problems by using multigrid iterations that are specifically tailored to their sparse structure. We conduct illustrative experiments and discuss the potential of our approach both in terms of algorithmic design and reconstruction quality. In particular, we present results that use as little as 2% of the image samples.

23. Robust Ellipse Fitting Based on Sparse Combination of Data Points (2013)
Ellipse fitting is widely applied in the fields of computer vision and automatic industry control, in which the procedure of ellipse fitting often follows the preprocessing step of edge detection in the original image. Therefore, the ellipse fitting method also depends on the accuracy of edge detection, besides its own performance, especially because the outliers and edge-point errors introduced by edge detection will cause severe performance degradation. In this paper, we develop a robust ellipse fitting method to alleviate the influence of outliers. The proposed algorithm solves for the ellipse parameters by linearly combining a subset of ("more accurate") data points (formed from edge points) rather than all data points (which contain possible outliers). In addition, considering that squaring the fitting residuals can magnify the contributions of extreme data points, our algorithm replaces the squared residuals with absolute residuals to reduce this influence. Moreover, the norm of the data-point errors is bounded, and a worst-case performance optimization is formed to be robust against data-point errors. The resulting mixed l1–l2 optimization problem is further derived as a second-order cone programming problem and solved by computationally efficient interior-point methods. Note that the fitting approach developed in this paper specifically deals with the overdetermined system, whereas the current sparse representation theory is only applied to underdetermined systems. Therefore, the proposed algorithm can be regarded as an extended application and development of sparse representation theory. Some simulated and experimental examples are presented to illustrate the effectiveness of the proposed ellipse fitting approach. Index Terms—Diameter control, edge points, ellipse fitting, iris recognition, least squares (LS), minimax criterion, outliers, overdetermined system, silicon single crystal, sparse representation.

24. Learning Dynamic Hybrid Markov Random Field for Image Labeling (2013)
Using shape information has gained increasing attention in the task of image labeling. In this paper, we present a dynamic hybrid Markov random field (DHMRF), which explicitly captures middle-level object shape and low-level visual appearance (e.g., texture and color) for image labeling. Each node in the DHMRF is described by either a deformable template or an appearance model as a visual prototype. On the other hand, the edges encode two types of interactions: co-occurrence and spatially layered context, with respect to the labels and prototypes of connected nodes. To learn the DHMRF model, an iterative algorithm is designed to automatically select the most informative features and estimate model parameters. The algorithm achieves high computational efficiency since a branch-and-bound schema is introduced to estimate model parameters. Compared with previous methods, which usually employ implicit shape cues, our DHMRF model seamlessly integrates color, texture, and shape cues to infer the labeling output, and thus produces more accurate and reliable results. Extensive experiments validate its superiority over other state-of-the-art methods in terms of recognition accuracy and implementation efficiency on: 1) the MSRC 21-class dataset, and 2) the Lotus Hill Institute 15-class dataset.
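Project 23 builds its robust formulation on top of the standard algebraic least-squares conic fit over an overdetermined system. That plain baseline (without the paper's absolute-residual and second-order cone programming machinery) can be sketched as:

```python
import numpy as np

def fit_conic_ls(x, y):
    """Algebraic least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.

    x, y: 1-D arrays of edge-point coordinates (more points than the 5 unknowns,
    i.e., an overdetermined system).
    """
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones(len(x)), rcond=None)
    return coef  # [a, b, c, d, e]

# Noise-free points on the axis-aligned ellipse x^2/9 + y^2/4 = 1.
t = np.array([0.3, 0.9, 1.7, 2.5, 3.3, 4.1, 4.9, 5.7])
xs, ys = 3.0 * np.cos(t), 2.0 * np.sin(t)
a, b, c, d, e = fit_conic_ls(xs, ys)  # expect a ≈ 1/9, c ≈ 1/4, b = d = e ≈ 0
```

Squaring residuals, as `lstsq` does, is exactly what lets outliers dominate the fit; the paper's replacement of squared residuals by absolute ones targets that weakness.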
- 6. 25 Coupled Variational Image Decomposition and Restoration Model for Blurred Cartoon- Plus-Texture Images With Missing Pixels In this paper, we develop a decomposition model to restore blurred images with missing pixels. Our assumption is that the underlying image is the superposition of cartoon and texture components. We use the total variation norm and its dual norm to regularize the cartoon and texture, respectively. We recommend an efficient numerical algorithm based on the splitting versions of augmented Lagrangian method to solve the problem. Theoretically, the existence of a minimizer to the energy function and the convergence of the algorithm are guaranteed. In contrast to recently developed methods for deblurring images, the proposed algorithm not only gives the restored image, but also gives a decomposition of cartoon and texture parts. These two parts can be further used in segmentation and inpainting problems. Numerical comparisons between this algorithm and some state-of-the-art methods are also reported. Image Processing 2013 26 Computationally Tractable Stochastic Image Modeling Based on Symmetric Markov Mesh Random Fields In this paper, the properties of a new class of causal Markov random fields, named symmetric Markov mesh random field, are initially discussed. It is shown that the symmetric Markov mesh random fields from the upper corners are equivalent to the symmetric Markov mesh random fields from the lower corners. Based on this new random field, a symmetric, corner-independent, and isotropic image model is then derived which incorporates the dependency of a pixel on all its neighbors. The introduced image model comprises the product of several local 1D density and 2D joint density functions of pixels in an image thus making it computationally tractable and practically feasible by allowing the use of histogram and joint histogram approximations to estimate the model parameters. 
An image restoration application is also presented to confirm the effectiveness of the model developed. The experimental results demonstrate that this new model provides an improved tool for image modeling purposes compared to the conventional Markov random field models. Image Processing 2013 27 Image Sharpness Assessment Based on Local Phase Coherence Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we approach the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to complex coefficients spread over three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms. Image Processing 2013 28 Colorization-Based Compression Using Optimization In this paper, we formulate the colorization-based coding problem into an optimization problem, i.e., an L1 minimization problem. In colorization-based coding, the encoder chooses a few representative pixels (RP) for which the chrominance values and the positions are sent to the decoder, whereas in the decoder, the chrominance values for all the pixels are reconstructed by colorization methods.
The main issue in colorization-based coding is how to extract the RP well so that the compression rate and the quality of the reconstructed color image become good. By formulating the colorization-based coding into an L1 minimization problem, it is guaranteed that, given the colorization matrix, the chosen set of RP becomes the optimal set in the sense that it minimizes the error between the original and the reconstructed color image. In other words, for a fixed error value and a given colorization matrix, the chosen set of RP is the smallest set possible. We also propose a method to construct the colorization matrix that colorizes the image in a multiscale manner. This, combined with the proposed RP extraction method, allows us to choose a very small set of RP. This is shown experimentally. Image Processing 2013 29 A Generalized Random Walk With Restart and Its Application in Depth Up-Sampling and Interactive Segmentation In this paper, the origin of random walk with restart (RWR) and its generalization are described. It is well known that the random walk (RW) and the anisotropic diffusion models share the same energy functional, i.e., the former provides a steady-state solution and the latter gives a flow solution. In contrast, the theoretical background of the RWR scheme is different from that of the diffusion-reaction equation, although the restarting term of the RWR plays a role similar to the reaction term of the diffusion-reaction equation. The behaviors of the two approaches with respect to outliers reveal that they possess different attributes in terms of data propagation. This observation leads to the derivation of a new energy functional, where both volumetric heat capacity and thermal conductivity are considered together, and provides a common framework that unifies both the RW and the RWR approaches, in addition to other regularization methods. The proposed framework allows the RWR to be generalized (GRWR) in semilocal and nonlocal forms.
The experimental results demonstrate the superiority of GRWR over existing regularization approaches in terms of depth map up-sampling and interactive image segmentation. Image Processing 2013
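The restart term mentioned in the abstract above is easiest to see in the plain RWR iteration itself; a minimal numpy sketch (standard RWR, not the paper's generalization; the affinity matrix and restart probability are illustrative assumptions):

```python
import numpy as np

def rwr(W, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """Random walk with restart on a weighted graph.

    W:    (n, n) symmetric non-negative affinity matrix.
    seed: index of the restart node.
    Iterates r = (1 - c) * P r + c * e to its fixed point,
    where P is the column-stochastic transition matrix and
    e is the indicator of the seed node.
    """
    n = W.shape[0]
    P = W / W.sum(axis=0, keepdims=True)   # column-normalize affinities
    e = np.zeros(n); e[seed] = 1.0
    r = e.copy()
    for _ in range(max_iter):
        r_new = (1 - restart) * P @ r + restart * e
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r
```

The restart probability plays the role the abstract attributes to the reaction term: it keeps probability mass anchored near the seed instead of diffusing it away entirely.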
- 7. 30 Library-Based Illumination Synthesis for Critical CMOS Patterning In optical microlithography, the illumination source for critical complementary metal–oxide–semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using the Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization. Image Processing 2013 31 Variational Optical Flow Estimation Based on Stick Tensor Voting Variational optical flow techniques allow the estimation of flow fields from spatio-temporal derivatives. They are based on minimizing a functional that contains a data term and a regularization term. Recently, numerous approaches have been presented for improving the accuracy of the estimated flow fields. Among them, tensor voting has been shown to be particularly effective in the preservation of flow discontinuities. This paper presents an adaptation of the data term by using anisotropic stick tensor voting in order to gain robustness against noise and outliers with significantly lower computational cost than (full) tensor voting.
In addition, an anisotropic complementary smoothness term depending on directional information estimated through stick tensor voting is utilized in order to preserve discontinuities in the estimated flow fields. Finally, a weighted non-local term that depends on both the estimated directional information and the occlusion state of pixels is integrated during the optimization process in order to denoise the final flow field. The proposed approach yields state-of-the-art results on the Middlebury benchmark. Image Processing 2013 32 GPU Accelerated Edge-Region Based Level Set Evolution Constrained by 2D Gray-Scale Histogram Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which uses edge, region, and 2D histogram information simultaneously in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on an NVIDIA graphics processing unit to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enables the detection of objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.
Image Processing 2013 33 Orientation Imaging Microscopy With Optimized Convergence Angle Using CBED Patterns in TEMs Grain size statistics, texture, and grain boundary distribution are microstructural characteristics that greatly influence materials properties. These characteristics can be derived from an orientation map obtained using orientation imaging microscopy (OIM) techniques. The OIM techniques are generally performed using transmission electron microscopy (TEM) for nanomaterials. Although some of these techniques have limited applicability in certain situations, others have limited availability because of the external hardware required. In this paper, an automated method to generate orientation maps using convergent beam electron diffraction (CBED) patterns obtained in a conventional TEM setup is presented. This method is based upon dynamical diffraction theory, which describes electron diffraction more accurately than the kinematical theory used by several existing OIM techniques. In addition, the method of this paper uses wide angle convergent beam electron diffraction for performing OIM. It is shown in this paper that the use of the wide angle convergent electron beam provides additional information that is not available otherwise. Together, the presented method exploits the additional information and combines it with the calculations from the dynamical theory to provide accurate orientation maps in a conventional TEM setup. The automated method of this paper is applied to a platinum thin film sample. The presented method correctly identified the texture preference in the sample. Image Processing 2013
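Orientation mapping of this kind ultimately scores each observed diffraction pattern against a library of simulated patterns. As a generic sketch of that matching step (not the paper's dynamical-theory scoring; the library dictionary and the plain normalized cross-correlation score are illustrative assumptions):

```python
import numpy as np

def best_orientation(pattern, library):
    """Pick the library entry most correlated with an observed pattern.

    library: dict mapping orientation labels to template arrays of the
    same shape as `pattern`.  The score here is plain normalized
    cross-correlation; a dynamical-theory method would replace the
    templates with simulated CBED patterns.
    """
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return (a * b).mean()
    return max(library, key=lambda k: ncc(pattern, library[k]))
```

Repeating this per probe position yields the orientation map; the paper's contribution lies in how the library patterns are computed, not in the matching loop itself.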
- 8. 34 Multivariate Slow Feature Analysis and Decorrelation Filtering for Blind Source Separation We generalize the method of Slow Feature Analysis (SFA) for vector-valued functions of several variables and apply it to the problem of blind source separation, in particular to image separation. It is generally necessary to use multivariate SFA instead of univariate SFA for separating multi-dimensional signals. For the linear case, an exact mathematical analysis is given, which shows in particular that the sources are perfectly separated by SFA if and only if they and their first-order derivatives are uncorrelated. When the sources are correlated, we apply the following technique called Decorrelation Filtering: use a linear filter to decorrelate the sources and their derivatives in the given mixture, then apply the unmixing matrix obtained on the filtered mixtures to the original mixtures. If the filtered sources are perfectly separated by this matrix, so are the original sources. A decorrelation filter can be numerically obtained by solving a nonlinear optimization problem. This technique can also be applied to other linear separation methods whose output signals are decorrelated, such as ICA. When there are more mixtures than sources, one can determine the actual number of sources by using a regularized version of SFA with decorrelation filtering. Extensive numerical experiments using SFA and ICA with decorrelation filtering, supported by mathematical analysis, demonstrate the potential of our methods for solving problems involving blind source separation. Image Processing 2013 35 A Variational Approach for Pan-Sharpening Pan-sharpening is a process of acquiring a high resolution multispectral (MS) image by combining a low resolution MS image with a corresponding high resolution panchromatic (PAN) image.
In this paper, we propose a new variational pan-sharpening method based on three basic assumptions: 1) the gradient of the PAN image could be a linear combination of those of the pan-sharpened image bands; 2) the upsampled low resolution MS image could be a degraded form of the pan-sharpened image; and 3) the gradient in the spectral direction of the pan-sharpened image should approximate that of the upsampled low resolution MS image. An energy functional, whose minimizer corresponds to the best pan-sharpened result, is built based on these assumptions. We discuss the existence of a minimizer of our energy and describe the numerical procedure based on the split Bregman algorithm. To verify the effectiveness of our method, we qualitatively and quantitatively compare it with some state-of-the-art schemes using QuickBird and IKONOS data. In particular, we classify the existing quantitative measures into four categories and choose two representatives in each category for more reasonable quantitative evaluation. The results demonstrate the effectiveness and stability of our method in terms of the related evaluation benchmarks. Besides, the computational efficiency comparison with other variational methods also shows that our method is remarkably efficient. Image Processing 2013 36 Segment Adaptive Gradient Angle Interpolation We introduce a new edge-directed interpolator based on locally defined, straight line approximations of image isophotes. Spatial derivatives of image intensity are used to describe the principal behavior of pixel-intersecting isophotes in terms of their slopes. The slopes are determined by inverting a tridiagonal matrix and are forced to vary linearly from pixel to pixel within segments. Image resizing is performed by interpolating along the approximated isophotes.
The proposed method can accommodate arbitrary scaling factors, provides state-of-the-art results in terms of PSNR as well as other quantitative visual quality metrics, and has the advantage of reduced computational complexity that is directly proportional to the number of pixels. Image Processing 2013 37 Texture Enhanced Histogram Equalization Using TV-L1 Image Decomposition Histogram transformation defines a class of image processing operations that are widely applied in the implementation of data normalization algorithms. In this paper, we present a new variational approach for image enhancement that is constructed to alleviate the intensity saturation effects that are introduced by standard contrast enhancement (CE) methods based on histogram equalization. We initially apply total variation (TV) minimization with an L1 fidelity term to decompose the input image with respect to cartoon and texture components. Contrary to previous papers that rely solely on the distribution of the intensity information, here the texture information is also employed to emphasize the contribution of the local textural features in the CE process. This is achieved by implementing a nonlinear histogram warping CE strategy that is able to maximize the information content in the transformed image. Our experimental study addresses the CE of a wide variety of image data, and comparative evaluations are provided to illustrate that our method produces better results than conventional CE strategies. Image Processing 2013 38 Novel True-Motion Estimation Algorithm and Its Application to Motion Compensated Temporal Frame Interpolation In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC).
Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by blending both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the other MCFRUC techniques. Image Processing 2013
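The block-matching search that TME constrains can be sketched in a few lines. This is plain full-search SAD matching with no smoothness terms (the block and search sizes are illustrative assumptions); a true-motion estimator adds the smoothness constraints described above on top of this search.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Full-search SAD block matching from curr back to prev.

    Returns an (H//block, W//block, 2) array of (dy, dx) motion
    vectors, one per block of the current frame.
    """
    H, W = curr.shape
    mvs = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y0, x0 = by * block, bx * block
            ref = curr[y0:y0+block, x0:x0+block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y and y + block <= H and 0 <= x and x + block <= W:
                        # sum of absolute differences against the candidate block
                        sad = np.abs(prev[y:y+block, x:x+block] - ref).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```

Minimizing SAD alone finds the best photometric match, which is exactly what the abstract distinguishes from true motion: without smoothness constraints, nothing forces neighbouring blocks to agree.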
- 9. 39 Nonlocal Regularization of Inverse Problems: A Unified Variational Framework We introduce a unifying energy minimization framework for nonlocal regularization of inverse problems. In contrast to the weighted sum of square differences between image pixels used by current schemes, the proposed functional is an unweighted sum of inter-patch distances. We use robust distance metrics that promote the averaging of similar patches, while discouraging the averaging of dissimilar patches. We show that the first iteration of a majorize–minimize algorithm to minimize the proposed cost function is similar to current nonlocal methods. The reformulation thus provides a theoretical justification for the heuristic approach of iterating nonlocal schemes, which reestimate the weights from the current image estimate. Thanks to the reformulation, we now understand that the widely reported alias amplification associated with iterative nonlocal methods is caused by convergence to a local minimum of the nonconvex penalty. We introduce an efficient continuation strategy to overcome this problem. The similarity of the proposed criterion to widely used nonquadratic penalties (e.g., total variation and ℓp semi-norms) opens the door to the adaptation of fast algorithms developed in the context of compressive sensing; we introduce several novel algorithms to solve the proposed nonlocal optimization problem. Thanks to the unifying framework, these fast algorithms are readily applicable for a large class of distance metrics. Image Processing 40 Image Inpainting on the Basis of Spectral Structure from 2-D Nonharmonic Analysis The restoration of images by digital inpainting is an active field of research and such algorithms are, in fact, now widely used. Conventional methods generally apply textures that are most similar to the areas around the missing region or use a large image database. However, this produces discontinuous textures and thus unsatisfactory results.
Here, we propose a new technique to overcome this limitation by using signal prediction based on the nonharmonic analysis (NHA) technique proposed by the authors. NHA can be used to extract accurate spectra, irrespective of the window function, and its frequency resolution is finer than that of the discrete Fourier transform. The proposed method sequentially generates new textures on the basis of the spectrum obtained by NHA. Missing regions from the spectrum are repaired using an improved cost function for 2D NHA. The proposed method is evaluated using the standard images Lena, Barbara, Airplane, Pepper, and Mandrill. The results show an improvement in MSE of about 10–20 compared with the exemplar-based method and good subjective quality. Image Processing 41 Image Completion by Diffusion Maps and Spectral Relaxation We present a framework for image inpainting that utilizes the diffusion framework approach to spectral dimensionality reduction. We show that when the inpainting problem is formulated in the embedding domain, the region to be inpainted is generally smoother, particularly for textured images. Thus, textured images can be inpainted through simple exemplar-based and variational methods. We discuss the properties of the induced smoothness and relate it to the underlying assumptions used in contemporary inpainting schemes. As the diffusion embedding is nonlinear and noninvertible, we propose a novel computational approach to approximate the inverse mapping from the inpainted embedding space to the image domain. We formulate the mapping as a discrete optimization problem, solved through spectral relaxation. The effectiveness of the presented method is exemplified by inpainting real images, where it is shown to compare favorably with contemporary state-of-the-art schemes.
Image Processing 2013 42 Gaussian Blurring-Invariant Comparison of Signals and Images We present a Riemannian framework for analyzing signals and images in a manner that is invariant to their level of blurriness, under Gaussian blurring. Using a well known relation between Gaussian blurring and the heat equation, we establish an action of the blurring group on image space and define an orthogonal section of this action to represent and compare images at the same blur level. This comparison is based on geodesic distances on the section manifold which, in turn, are computed using a path-straightening algorithm. The actual implementations use coefficients of images under a truncated orthonormal basis, and the blurring action corresponds to exponential decays of these coefficients. We demonstrate this framework using a number of experimental results involving 1D signals and 2D images. As a specific application, we study the effect of blurring on recognition performance when 2D facial images are used for recognizing people. Image Processing 2013 43 Corner Detection and Classification Using Anisotropic Directional Derivative Representations This paper proposes a corner detector and classifier using anisotropic directional derivative (ANDD) representations. The ANDD representation at a pixel is a function of the oriented angle and characterizes the local directional grayscale variation around the pixel. The proposed corner detector fuses the ideas of contour- and intensity-based detection. It consists of three cascaded blocks. First, the edge map of an image is obtained by the Canny detector, from which contours are extracted and patched. Next, the ANDD representation at each pixel on the contours is calculated and normalized by its maximal magnitude. The area surrounded by the normalized ANDD representation forms a new corner measure. Finally, nonmaximum suppression and thresholding are operated on each contour to find corners in terms of the corner measure.
Moreover, a corner classifier based on the peak number of the ANDD representation is given. Experiments are conducted to evaluate the proposed detector and classifier. The proposed detector is competitive with two recent state-of-the-art corner detectors, the He & Yung detector and the CPDA detector, in detection capability and attains higher repeatability under affine transforms. The proposed classifier can effectively discriminate simple corners, Y-type corners, and higher order corners. Image Processing 2013
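For contrast with the ANDD measure above, the classical intensity-based baseline is the Harris response from the 2x2 structure tensor. This sketch is plainly not the paper's method (the box-filter smoothing and the `k` constant are the usual textbook choices), but it shows what a purely intensity-based corner measure looks like.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response from the 2x2 structure tensor.

    The ANDD detector described above replaces this isotropic
    gradient statistic with anisotropic directional derivatives
    evaluated along extracted contours.
    """
    # central-difference gradients (np.gradient returns [d/dy, d/dx])
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

The response is positive where the gradient varies in two directions (a corner), negative along a single-direction edge, and zero in flat regions, which is the behaviour any corner measure, ANDD included, must reproduce.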
- 10. 44 Fusion of Multifocus Images to Maximize Image Information When an image of a 3-D scene is captured, only scene parts at the focus plane appear sharp. Scene parts in front of or behind the focus plane appear blurred. In order to create an image where all scene parts appear sharp, it is necessary to capture images of the scene at different focus levels and fuse the images. In this paper, first the registration of multifocus images is discussed and then an algorithm to fuse the registered images is described. The algorithm divides the image domain into uniform blocks and for each block identifies the image with the highest contrast. The images selected in this manner are then locally blended to create an image that has overall maximum contrast. Examples demonstrating registration and fusion of multifocus images are given and discussed. Image Processing 2013 45 Inception of Hybrid Wavelet Transform using Two Orthogonal Transforms and Its Use for Image Compression The paper presents a novel hybrid wavelet transform generation technique using two orthogonal transforms. The orthogonal transforms are used for analysis of global properties of the data in the frequency domain. For studying the local properties of the signal, the concept of the wavelet transform is introduced, where the mother wavelet function gives the global properties of the signal and the wavelet basis functions, which are compressed versions of the mother wavelet, are used to study the local properties of the signal. Wavelets of some orthogonal transforms extract the global characteristics of the data better, while others might capture the local characteristics better. The idea of the hybrid wavelet transform comes into the picture with a view to combining the traits of two different orthogonal transform wavelets and exploiting the strengths of both.
Image Processing 2013 46 A Comparative Analysis of Image Fusion Methods There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity–hue–saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, and the à trous algorithm-based modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level. Image Processing 2013 47 A New DCT-based Multiresolution Method for Simultaneous Denoising and Fusion of SAR Images Individual multiresolution techniques for separate image fusion and denoising have been widely researched.
We propose a novel multiresolution Discrete Cosine Transform based method for simultaneous image denoising and fusion, demonstrating its efficacy with respect to the Discrete Wavelet Transform and the Dual-tree Complex Wavelet Transform. We incorporate the Laplacian pyramid transform multiresolution analysis and a sliding window Discrete Cosine Transform for simultaneous denoising and fusion of the multiresolution coefficients. The impact of image denoising on the results of fusion is demonstrated, and the advantages of simultaneous denoising and fusion for SAR images are also presented. Image Processing 2013 48 Brain Segmentation using Fuzzy C means clustering to detect tumour Region Tumor segmentation from MRI data is an important but time consuming manual task performed by medical experts. Research that addresses brain diseases through computer vision is one of the recent challenges in medicine, and engineers and researchers have launched efforts to carry out technological innovations in imaging. This paper focuses on a new algorithm for brain segmentation of MRI images by the fuzzy C means algorithm to accurately diagnose the region of cancer. In the first step it proceeds by noise filtering and later applies the FCM algorithm to segment only the tumor area. In this research, multiple MRI images of the brain are used to detect glioma (tumor) growth by an advanced diameter technique. Image Processing 2013
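The fuzzy C-means step at the heart of the segmentation abstract above is a standard algorithm and can be sketched directly. A minimal numpy version, assuming intensity samples as input (the cluster count, fuzzifier `m`, and random initialization are the usual textbook defaults, not the paper's settings):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-means clustering of samples X with shape (n, d).

    Returns (centers, U) where U[i, k] is the membership of sample i
    in cluster k.  For tumour segmentation, pixel intensities are the
    samples and the cluster with the highest-intensity centre is
    taken as the candidate tumour region.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)      # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what lets FCM cope with the partial-volume intensities typical of MRI data.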
- 11. 49 Comprehensive and Comparative Study of Image Fusion Techniques Image fusion is one of the major research fields in image processing. Image fusion is a process of combining the relevant information from a set of images into a single image, wherein the resultant fused image will be more informative and complete than any of the input images. The image fusion process can be defined as the integration of information from a number of registered images without the introduction of distortion. It is often not possible to get an image that contains all relevant objects in focus. One way to overcome this problem is image fusion, in which one can acquire a series of pictures with different focus settings and fuse them to produce an image with extended depth of field. Image fusion techniques can improve the quality and increase the application of these data. This paper discusses the three categories of image fusion algorithms: the basic fusion algorithms, the pyramid based algorithms, and the basic DWT algorithms. It gives a literature review on some of the existing image fusion techniques, such as primitive fusion (Averaging Method, Select Maximum, and Select Minimum), Discrete Wavelet transform based fusion, Principal component analysis (PCA) based fusion, etc. The purpose of the paper is to present a wide range of algorithms together with their comparative study. There are many techniques proposed by different authors in order to fuse the images and produce a clear view of the image. Hierarchical multiscale and multiresolution image processing techniques and pyramid decomposition are the basis for the majority of image fusion algorithms. All these available techniques are designed for particular kinds of images. Until now, techniques for pixel-level image fusion, for which many different methods have been developed and a rich theory exists, have been of highest relevance for remote sensing data processing and analysis.
Researchers have shown that fusion techniques that operate on such features in the transform domain yield subjectively better fused images than pixel-based techniques. For this purpose, feature based fusion techniques that are usually based on empirical or heuristic rules are employed. Because a general theory of fusion is lacking, algorithms are usually developed for certain applications and datasets. To implement pixel level fusion, arithmetic operations are widely used in the time domain and frequency transformations are used in the frequency domain. Image fusion plays an important role in many application areas, such as navigation guidance, object detection and recognition, medical diagnosis, satellite imaging for remote sensing, robot vision, military and civilian surveillance, etc. The paper also provides a survey of some of the various existing techniques applied for image fusion, and the comparative study of all the techniques concludes with the better approach for future research. Image Processing 2013 50 Efficient image compression technique using full, column and row transforms on colour image This paper presents an image compression technique based on the column transform, row transform and full transform of an image. Different transforms like DFT, DCT, Walsh, Haar, DST, Kekre's Transform and Slant transform are applied on colour images of size 256x256x8 by separating the R, G, and B colour planes. These transforms are applied in three different ways, namely: column, row and full transform. From each transformed image, a specific number of low energy coefficients is eliminated and compressed images are reconstructed by applying the inverse transform. The Root Mean Square Error (RMSE) between the original image and the compressed image is calculated in each case. From the implementation of the proposed technique it has been observed that the RMSE values and visual quality of images obtained by the column transform are closer to the RMSE values given by the full transform of images.
The row transform gives quite high RMSE values as compared to the column and full transforms at higher compression ratios. The aim of the proposed technique is to achieve compression with acceptable image quality and fewer computations by using the column transform. Image Processing 2013 51 Grading of rice grains by image processing The purpose of this paper is the grading of rice grains by image processing techniques. Commercially, the grading of rice is done according to the size of the grain kernel (full, half or broken). The food grain types and their quality are rapidly assessed through visual inspection by human inspectors. The decision making capabilities of human inspectors are subject to external influences such as fatigue, vengeance, bias, etc.; with the help of image processing we can overcome that. By image processing we can also identify any broken grains mixed in. Here we discuss the various procedures used to obtain the percentage quality of rice grains. Image Processing 2013 52 Innovative Multilevel Image Fusion Algorithm using Combination of Transform Domain and Spatial Domain Methods with Comparative Analysis of Wavelet and Curvelet Transform Image fusion is a widely used term in different applications, namely satellite imaging, remote sensing, multifocus imaging and medical imaging. In this paper, we have implemented multi-level image fusion in which fusion is carried out in two stages. Firstly, the Discrete Wavelet or Fast Discrete Curvelet transform is applied on both source images and, secondly, image fusion is carried out either with spatial domain methods like Averaging, Minimum selection, Maximum selection and PCA or with Pyramid transform methods like the Laplacian Pyramid transform.
Further, comparative analysis of fused image obtained from both Discrete Wavelet and Fast Discrete Curvelet transform is done which proves effective image fusion using proposed Curvelet transform than Wavelet transform through enhanced visual quality of fused image and by analysis of 7 quality metrics parameters. The proposed method is very innovative which can be applied to medical and multifocus imaging applications in real time. These analyses can be useful for further research work in image fusion and also the fused image obtained using Curvelet transform can be helpful for better medical diagnosis. Image Processing 2013
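The two-stage scheme of entry 52 (a transform of both sources, then a coefficient-level fusion rule, then the inverse transform) can be sketched minimally in pure Python. This is an illustration under simplifying assumptions, not the paper's implementation: a single-level 2-D Haar transform stands in for the DWT/Curvelet, the approximation band is fused by averaging, and the detail bands by maximum-absolute selection; all function names are illustrative.

```python
# Minimal two-stage fusion sketch: single-level 2-D Haar transform (an
# assumption; the paper uses DWT or Curvelet libraries), averaging for the
# approximation (LL) band, max-absolute selection for the detail bands.

def haar_1d(v):
    # One level of the 1-D Haar transform: pairwise averages, then differences.
    n = len(v) // 2
    return ([(v[2*i] + v[2*i + 1]) / 2 for i in range(n)] +
            [(v[2*i] - v[2*i + 1]) / 2 for i in range(n)])

def ihaar_1d(v):
    # Inverse of haar_1d.
    n = len(v) // 2
    out = []
    for a, d in zip(v[:n], v[n:]):
        out += [a + d, a - d]
    return out

def haar_2d(img):
    # Transform rows, then columns.
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def ihaar_2d(coeffs):
    # Invert columns, then rows.
    cols = [ihaar_1d(list(c)) for c in zip(*coeffs)]
    rows = [list(r) for r in zip(*cols)]
    return [ihaar_1d(r) for r in rows]

def fuse(a, b):
    # Stage 1: transform both (square, even-sized) source images.
    # Stage 2: fuse the coefficients, then invert the transform.
    n = len(a)
    ca, cb = haar_2d(a), haar_2d(b)
    fused = []
    for i in range(n):
        row = []
        for j in range(n):
            if i < n // 2 and j < n // 2:
                row.append((ca[i][j] + cb[i][j]) / 2)   # LL band: average
            elif abs(ca[i][j]) >= abs(cb[i][j]):
                row.append(ca[i][j])                    # details: max-abs
            else:
                row.append(cb[i][j])
        fused.append(row)
    return ihaar_2d(fused)
```

Fusing an image with itself reproduces the image, which is a quick sanity check; with two differently focused sources, the max-abs rule keeps the stronger detail coefficients from either input.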
53. Multi layer information hiding - a blend of steganography and visual cryptography

This study combines the notions of steganography [1] and visual cryptography [2]. Recently, a number of innovative algorithms have been proposed in the fields of steganography and visual cryptography with the goals of improving security, reliability and efficiency, because there will always be new kinds of threats in the field of information hiding. Steganography and visual cryptography are, in effect, two sides of the same coin: visual cryptography has the problem of revealing the existence of the hidden data, whereas steganography hides that existence. This study suggests multiple layers of encryption by hiding the hidden data: the information is first encrypted using visual cryptography, and the resulting share(s) [3] are then hidden inside images or audio files using steganography. The proposed system has fewer drawbacks and can better resist attacks. (Image Processing, 2013)

54. Non-destructive Quality Analysis of Indian Basmati Oryza Sativa SSP Indica (Rice) Using Image Processing

The agricultural industry on the whole is ancient, and quality assessment of grains has been a major challenge since time immemorial. This paper presents a solution for quality evaluation and grading in the rice industry using computer vision and image processing. It defines the basic quality-assessment problem of the rice industry, which is traditionally handled manually by a human inspector. Machine vision provides an automated, non-destructive and cost-effective alternative. With the proposed method, based on computer vision, image analysis and processing, a higher degree of quality is achieved than with human visual inspection. The paper also proposes a new method for counting the number of Oryza sativa L. (rice) seeds, both long and small, using image processing with a high degree of quality, and then quantifies the rice seeds based on combined measurements. (Image Processing, 2013)

55. Quality Evaluation of Rice Grains Using Morphological Methods

In this paper we present an automatic evaluation method for determining the quality of milled rice. Among the milled rice samples, the quantity of broken kernels is determined with the help of shape descriptors and geometric features. Grains are classified as broken kernels when their length is less than 75% of the full grain size. The proposed method gives good results in the evaluation of rice quality. (Image Processing, 2013)

56. Algorithmic Approach to Quality Analysis of India Basmathi Rice using Digital Image Processing

The aim of this paper is to suggest an algorithm for the quality analysis of Indian Basmati rice using image processing techniques. With the help of this algorithm, an automated software system can be built that avoids human inspection and its related drawbacks; convenient software tools compatible with the hardware platform can be selected. Analysis and classification of rice is currently done visually and manually by human inspectors, whose decisions may be affected by external factors such as tiredness, bias, revenge or other human psychological limitations. Image processing techniques can overcome this: digital image processing can classify rice grains with speed and accuracy. Here we discuss the different parameters used for the analysis of rice grains and how the algorithm can measure and compare them against accepted standards. (Image Processing, 2013)

57. SVD Based Image Processing Applications: State of the Art Contributions and Research Challenges

Singular Value Decomposition (SVD) has recently emerged as a new paradigm for processing different types of images. SVD is an attractive algebraic transform for image processing applications. The paper offers an experimental survey of the SVD as an efficient transform in image processing. Despite the well-known fact that SVD has attractive properties for imaging, the exploration of these properties in various image applications is still in its infancy. Since many of the SVD's attractive properties have not yet been utilized, this paper contributes by applying them to new image applications and strongly recommends further research challenges. The SVD properties for images are experimentally presented so that they can be utilized in developing new SVD-based image processing applications. The paper surveys the developed SVD-based image applications and also proposes some new contributions that originated from an analysis of SVD properties in different image processing tasks. The aim of this paper is to provide a better understanding of the SVD in image processing and to identify important applications and open research directions in this increasingly important area of SVD-based image processing. (Image Processing, 2013)

58. Two-stage image denoising by principal component analysis with local pixel grouping

This paper presents an efficient image denoising scheme using principal component analysis (PCA) with local pixel grouping (LPG). For better preservation of local image structures, a pixel and its nearest neighbors are modeled as a vector variable whose training samples are selected from the local window using block-matching-based LPG. This LPG procedure guarantees that only sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that local image features are well preserved after coefficient shrinkage in the PCA domain removes the noise. The LPG-PCA denoising procedure is iterated a second time to further improve the denoising performance, with the noise level adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in the preservation of fine image structure, compared with state-of-the-art denoising algorithms. (Image Processing, 2013)
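The block-matching-based local pixel grouping of entry 58 can be sketched as follows. This is a minimal pure-Python illustration of the LPG step only, under assumed illustrative block and search-window sizes; the PCA transform estimation and coefficient shrinkage stages of the actual LPG-PCA method are omitted, and all names are hypothetical.

```python
# Sketch of local pixel grouping (LPG): for a reference block, gather the
# most similar blocks in a search window by block matching (sum of squared
# differences), so that only similar-content blocks would enter the
# covariance estimate used for the PCA stage (omitted here).

def ssd(b1, b2):
    # Sum of squared differences between two equally sized blocks.
    return sum((p - q) ** 2
               for r1, r2 in zip(b1, b2) for p, q in zip(r1, r2))

def get_block(img, i, j, k):
    # k x k block with top-left corner at (i, j).
    return [row[j:j + k] for row in img[i:i + k]]

def lpg(img, i, j, k=2, search=4, n_best=3):
    # Return the n_best (ssd, row, col) matches for the reference block at
    # (i, j), searched within a window of +/- search pixels; the reference
    # block itself always matches with ssd 0.
    ref = get_block(img, i, j, k)
    candidates = []
    for di in range(max(0, i - search), min(len(img) - k, i + search) + 1):
        for dj in range(max(0, j - search), min(len(img[0]) - k, j + search) + 1):
            candidates.append((ssd(ref, get_block(img, di, dj, k)), di, dj))
    candidates.sort(key=lambda t: t[0])
    return candidates[:n_best]
```

On an image containing repeated patterns, the grouping correctly collects the identical blocks first (ssd 0) and leaves dissimilar blocks out, which is the property that lets the subsequent PCA estimate local statistics from similar content only.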
