IEEE Projects 2012–2013 – Digital Image Processing



Elysium Technologies Private Limited
Approved by ISO 9001:2008 and AICTE for SKP Training
Singapore | Madurai | Trichy | Coimbatore | Cochin | Kollam | Chennai

IEEE FINAL YEAR PROJECTS 2012–2013: DIGITAL IMAGE PROCESSING

Corporate Office, Madurai: 227-230, Church Road, Anna Nagar, Madurai – 625 020. Phone: 0452 – 4390702, 4392702, +9199447933980. Website: www.elysiumtechnologies.com
Branch Office, Trichy: 15, III Floor, SI Towers, Melapudur Main Road, Trichy – 620 001. Phone: 0431 – 4002234, +919790464324. Website: www.elysiumtechnologies.com
Branch Office, Coimbatore: 577/4, DB Road, RS Puram, opposite KFC, Coimbatore – 641 002. Phone: +919677751577. Email: info@elysiumtechnologies.com. Website: www.elysiumtechnologies.com
Branch Office, Kollam: Surya Complex, Vendor Junction, Kollam – 691 010, Kerala. Phone: 0474 – 2723622, +919446505482. Website: www.elysiumtechnologies.com
Branch Office, Cochin: 4th Floor, Anjali Complex, near South Over Bridge, Valanjambalam, Cochin – 682 016, Kerala. Phone: 0484 – 6006002, +917736004002.

IEEE Final Year Projects 2012 | Student Projects | Digital Image Processing Projects
DIGITAL IMAGE PROCESSING 2012–2013

EGC1301 – A Bayesian Theoretic Approach to Multiscale Complex-Phase-Order Representations
This paper explores a Bayesian theoretic approach to constructing multiscale complex-phase-order representations. We formulate the construction of complex-phase-order representations at different structural scales based on scale-space theory. Linear and nonlinear deterministic approaches are explored, and a Bayesian theoretic approach is introduced for constructing representations in such a way that strong structure localization and noise resilience are achieved. Experiments illustrate its potential for constructing robust multiscale complex-phase-order representations with well-localized structures across all scales under high-noise situations. An illustrative example of applications of the proposed approach is presented in the form of multimodal image registration and feature extraction.

EGC1302 – A Coarse-to-Fine Subpixel Registration Method to Recover Local Perspective Deformation in the Application of Image Super-Resolution
In this paper, a coarse-to-fine framework is proposed to accurately register the local regions of interest (ROIs) of images with independent perspective motions by estimating their deformation parameters. A coarse registration approach based on control points (CPs) is presented to obtain the initial perspective parameters. This approach exploits two constraints to solve the problem with a very limited number of CPs. One is named the point-point-line topology constraint, and the other is named the color and intensity distribution of segment constraint. Both constraints describe the consistency between the reference and sensed images. To obtain a finer registration, we convert the perspective deformation into affine deformations in local image patches so that affine refinements can be used readily. Then, the refined local affine parameters are utilized to recover precise perspective parameters of an ROI. Moreover, the location and dimension selections of local image patches are discussed by mathematical demonstrations to avoid the aperture effect. Experiments on simulated data and real-world sequences demonstrate the accuracy and the robustness of the proposed method. Experimental results on image super-resolution are also provided, which show a possible practical application of our method.

EGC1303 – A Coding-Cost Framework for Super-Resolution Motion Layer Decomposition
We consider the problem of decomposing a video sequence into a superposition of (a given number of) moving layers. For this problem, we propose an energy minimization approach based on coding cost. Our contributions affect both the model (what is minimized) and the algorithmic side (how it is minimized). The novelty of the coding-cost model is the inclusion of a refined model of the image formation process, known as super-resolution. This accounts for camera blur and area averaging arising in a physically plausible image formation process. It allows us to extract sharp high-resolution layers from the video sequence. The algorithmic framework is based on an alternating minimization scheme and includes the following innovations. (1) In a video labeling step, we optimize the layer domains; this regularizes the shapes of the layers and allows a very elegant handling of occlusions. (2) We present an efficient parallel algorithm for extracting super-resolved layers based on TV filtering.

EGC1304 – A Compressive Sensing and Unmixing Scheme for Hyperspectral Data Processing
Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and input/output throughputs, particularly when real-time processing is desired. In this paper, a proof-of-concept study is conducted on compressive sensing (CS) and unmixing for hyperspectral imaging. Specifically, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera based on the principle of CS. To decode the compressed data, we propose a numerical procedure to directly compute the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model minimizes the total variation of the abundance fractions subject to a preprocessed fidelity equation with a significantly reduced size and other side constraints. An augmented Lagrangian-type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidence obtained from this paper indicates that the proposed scheme has high potential in real-world applications.

EGC1305 – A Co-Saliency Model of Image Pairs
In this paper, we introduce a method to detect co-saliency from an image pair that may have some objects in common. The co-saliency is modeled as a linear combination of the single-image saliency map (SISM) and the multi-image saliency map (MISM). The first term is designed to describe the local attention, which is computed by using three saliency detection techniques available in the literature. To compute the MISM, a co-multilayer graph is constructed by dividing the image pair into a spatial pyramid representation. Each node in the graph is described by two types of visual descriptors, which are extracted from a representation of some aspects of local appearance, e.g., color and texture properties. In order to evaluate the similarity between two nodes, we employ a normalized single-pair SimRank algorithm to compute the similarity score. Experimental evaluation on a number of image pairs demonstrates the good performance of the proposed method on the co-saliency detection task.
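The co-saliency model in EGC1305 ultimately reduces to a weighted combination of two normalized maps. As a minimal sketch of that final step only (the SISM and MISM computations are the substance of the paper and are not reproduced), with an assumed equal weighting:

```python
import numpy as np

def cosaliency(sism: np.ndarray, mism: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Combine a single-image saliency map (SISM) and a multi-image
    saliency map (MISM) into a co-saliency map as a weighted sum.
    Each map is min-max normalized first so the weight w is meaningful."""
    def norm(m):
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w * norm(sism) + (1.0 - w) * norm(mism)
```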
EGC1306 – A Fast Majorize–Minimize Algorithm for the Recovery of Sparse and Low-Rank Matrices
We introduce a novel algorithm to recover sparse and low-rank matrices from noisy and undersampled measurements. We pose the reconstruction as an optimization problem, where we minimize a linear combination of data consistency error, a nonconvex spectral penalty, and a nonconvex sparsity penalty. We majorize the nondifferentiable spectral and sparsity penalties in the criterion by quadratic expressions to realize an iterative three-step alternating minimization scheme. Since each of these steps can be evaluated either analytically or using fast schemes, we obtain a computationally efficient algorithm. We demonstrate the utility of the algorithm in the context of dynamic magnetic resonance imaging (MRI) reconstruction from sub-Nyquist sampled measurements. The results show a significant improvement in signal-to-noise ratio and image quality compared with classical dynamic imaging algorithms. We expect the proposed scheme to be useful in a range of applications including video restoration and multidimensional MRI.

EGC1307 – A Kalman-Filtering Approach to High Dynamic Range Imaging for Measurement Applications
High dynamic range imaging (HDRI) methods in computational photography address situations where the dynamic range of a scene exceeds what can be captured by an image sensor in a single exposure. HDRI techniques have also been used to construct radiance maps in measurement applications; unfortunately, the design and evaluation of HDRI algorithms for use in these applications have received little attention. In this paper, we develop a novel HDRI technique based on pixel-by-pixel Kalman filtering and evaluate its performance using objective metrics that this paper also introduces. In the presented experiments, this new technique achieves as much as a 9.4-dB improvement in signal-to-noise ratio and as much as a 29% improvement in radiometric accuracy over a classic method.

EGC1308 – A Primal–Dual Method for Total-Variation-Based Wavelet Domain Inpainting
Loss of information in a wavelet domain can occur during storage or transmission when images are formatted and stored in terms of wavelet coefficients. This calls for image inpainting in wavelet domains. In this paper, a variational approach is used to formulate the reconstruction problem. We propose a simple but very efficient iterative scheme to calculate an optimal solution and prove its convergence. Numerical results are presented to show the performance of the proposed algorithm.

EGC1309 – A Psychovisual Quality Metric in Free-Energy Principle
In this paper, we propose a new psychovisual quality metric of images based on recent developments in brain theory and neuroscience, particularly the free-energy principle. The perception and understanding of an image is modeled as an active inference process, in which the brain tries to explain the scene using an internal generative model. The psychovisual quality is thus closely related to how accurately visual sensory data can be explained by the generative model, and the upper bound of the discrepancy between the image signal and its best internal description is given by the free energy of the cognition process. Therefore, the perceptual quality of an image can be quantified using the free energy. Constructively, we develop a reduced-reference free-energy-based distortion metric (FEDM) and a no-reference free-energy-based quality metric (NFEQM). The FEDM and the NFEQM are nearly invariant to many global systematic deviations in geometry and illumination that hardly affect visual quality, for which existing image quality metrics wrongly predict severe quality degradation. Although with very limited or even no information on the reference image, the FEDM and the NFEQM are highly competitive compared with the full-reference SSIM image quality metric on images in the popular LIVE database. Moreover, the FEDM and the NFEQM can correctly measure the visual quality of some model-based image processing algorithms, for which the competing metrics often contradict viewers' opinions.

EGC1310 – A Secret-Sharing-Based Method for Authentication of Grayscale Document Images via the Use of the PNG Image with a Data Repair Capability
A new blind authentication method based on the secret-sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret-sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
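EGC1310 builds on Shamir's (k, n) secret sharing, in which any k of n shares recover the hidden value. The core arithmetic can be sketched in a few lines; the block binarization, alpha-channel embedding, and tamper marking of the paper are omitted, and GF(257) is an assumed field choice for byte-sized values:

```python
import random

PRIME = 257  # smallest prime above 255, so one byte fits in the field

def make_shares(secret: int, k: int, n: int):
    """Shamir (k, n) sharing of one byte: sample a random degree-(k-1)
    polynomial with constant term `secret`, evaluate at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# example: split a block's authentication byte into 6 shares;
# any 2 repair it, matching the paper's two-share repair step
shares = make_shares(173, k=2, n=6)
assert recover(shares[:2]) == 173
```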
EGC1311 – A Semitransparency-Based Optical-Flow Method with a Point Trajectory Model for Particle-Like Video
This paper proposes a new semitransparency-based optical-flow model with a point trajectory (PT) model for particle-like video. Previous optical-flow models have used constraints ranging from image brightness constancy to image brightness change models. However, two important issues remain unsolved. The first is how to track/match a semitransparent object with a very large displacement between frames; such moving objects, with different shapes and sizes in an outdoor scene, move against a complicated background. Second, due to semitransparency, the image intensity between frames can also violate a previous image-brightness-based optical-flow model. Thus, we propose a two-step optimization of the optical-flow estimation model for a moving semitransparent object, i.e., a particle. In the first step, a rough optical flow between particles is estimated by a new alpha constancy constraint that is based on an image generation model of semitransparency. In the second step, the optical flow of a particle with a continuous trajectory in a definite temporal interval is refined based on a PT model. Many experiments using various falling-snow and foggy scenes with multiple moving vehicles show the significant improvement of the optical flow compared with a previous optical-flow model.

EGC1312 – A Spectral and Spatial Measure of Local Perceived Sharpness in Natural Images
This paper presents an algorithm designed to measure the local perceived sharpness in an image. Our method utilizes both spectral and spatial properties of the image: for each block, we measure the slope of the magnitude spectrum and the total spatial variation. These measures are then adjusted to account for visual perception, and the adjusted measures are combined via a weighted geometric mean. The resulting measure, S3 (spectral and spatial sharpness), yields a perceived sharpness map in which greater values denote perceptually sharper regions. This map can be collapsed into a single index, which quantifies the overall perceived sharpness of the whole image. We demonstrate the utility of the S3 measure for within-image and across-image sharpness prediction, no-reference image quality assessment of blurred images, and monotonic estimation of the standard deviation of the impulse response used in Gaussian blurring. We further evaluate the accuracy of S3 in local sharpness estimation by comparing S3 maps to sharpness maps generated by human subjects. We show that S3 can generate sharpness maps that are highly correlated with the human-subject maps.

EGC1313 – A Statistical Method for 2-D Facial Landmarking
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves on the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over the baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
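The two per-block measures in EGC1312 above (spectral slope and spatial variation) and their geometric-mean combination can be illustrated with a toy sketch; the paper's perceptual adjustment functions are not reproduced, so the squashing constants below are arbitrary placeholders:

```python
import numpy as np

def s3_block(block, alpha=0.5):
    """Toy S3 for one grayscale block in [0, 255]: a spectral score from
    the slope of the radially averaged magnitude spectrum, a spatial
    score from the mean local variation, combined by a weighted
    geometric mean."""
    block = block.astype(np.float64)
    f = np.fft.fftshift(np.abs(np.fft.fft2(block)))
    h, w = block.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), f.ravel()) / counts   # radial average
    freqs = np.arange(1, len(radial))
    slope = np.polyfit(np.log(freqs), np.log(radial[1:] + 1e-12), 1)[0]
    spectral = np.clip(1.0 + slope / 3.0, 0.0, 1.0)       # ~-3 blurry .. ~0 sharp
    tv = 0.5 * (np.abs(np.diff(block, axis=0)).mean() +
                np.abs(np.diff(block, axis=1)).mean())
    spatial = np.clip(tv / 64.0, 0.0, 1.0)                # crude normalization
    return spectral ** alpha * spatial ** (1.0 - alpha)
```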
EGC1314 – A Surface-Based 3-D Dendritic Spine Detection Approach from Confocal Microscopy Images
Determining the relationship between the dendritic spine morphology and its functional properties is a fundamental challenge in neurobiology research. In particular, how to accurately and automatically analyze meaningful structural information from a large microscopy image data set is far from being resolved. As pointed out in the existing literature, one remaining challenge in spine detection and segmentation is how to automatically separate touching spines. In this paper, based on various global and local geometric features of the dendrite structure, we propose a novel approach to detect and segment neuronal spines and, in particular, a breaking-down and stitching-up algorithm to accurately separate touching spines. Extensive performance comparisons show that our approach is more accurate and robust than two state-of-the-art spine detection and segmentation algorithms.

EGC1315 – A Symmetric Motion Estimation Method for Motion-Compensated Frame Interpolation
This paper proposes a new motion-compensated frame interpolation (MCFI) method. The proposed method utilizes a symmetric motion estimation (SME) method, which is a new pixel-wise motion estimation method for intermediate frame interpolation. By using an adaptive search range for the motion estimation, the proposed method can obtain a more reliable motion vector for each pixel than previous MCFI methods that use a conventional block matching algorithm (BMA). In addition, we propose a combined method of the SME and BMA to reduce the computation time of the pixel-wise motion estimation method. The experimental results show that the proposed method outperforms other MCFI methods in terms of generating objectively and subjectively better interpolated frames.

EGC1316 – A Unified Spectral-Domain Approach for Saliency Detection and Its Application to Automatic Object Segmentation
In this paper, a visual attention model is incorporated for efficient saliency detection, and the salient regions are employed as object seeds for our automatic object segmentation system. In contrast with existing interactive segmentation approaches that require considerable user interaction, the proposed method does not require it, i.e., the segmentation task is fulfilled in a fully automatic manner. First, we introduce a novel unified spectral-domain approach for saliency detection. Our visual attention model originates from a well-known property of the human visual system: human visual perception is highly adaptive and sensitive to structural information in images rather than nonstructural information. Then, based on the saliency map, we propose an iterative self-adaptive segmentation framework for more accurate object segmentation. Extensive tests on a variety of cluttered natural images show that the proposed algorithm is an efficient indicator for characterizing human perception and can provide satisfying segmentation performance.
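EGC1316's exact spectral-domain formulation is not reproduced here; as a stand-in, the classic spectral-residual saliency detector (Hou and Zhang) illustrates the general pipeline such methods share: transform, suppress the predictable part of the spectrum, and invert:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(gray):
    """Spectral-residual saliency for a 2-D grayscale array: keep the
    phase, subtract the smoothed trend from the log amplitude, and take
    the inverse transform's power as the saliency map."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-12)
    phase = np.angle(f)
    residual = log_amp - gaussian_filter(log_amp, sigma=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)                    # post-smoothing
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

Thresholding such a map gives the kind of object seeds the abstract describes feeding into an automatic segmentation stage.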
EGC1317 – A Uniform Grid Structure to Speed up Example-Based Photometric Stereo
In this paper, we describe a data structure and an algorithm to accelerate the table lookup step in example-based multi-image photometric stereo. In that step, one must find a pixel of a reference object, of known shape and color, whose appearance under different illumination fields is similar to that of a given scene pixel. This search reduces to finding the closest match to a given query vector in a table with a thousand or more vectors. Our method is faster than previously known solutions for this problem but, unlike some of them, is exact, i.e., it always yields the best matching entry in the table, and it does not assume point-like sources. Our solution exploits the fact that the table is in fact a fairly flat 2-D manifold in a higher-dimensional space, so that the search can be efficiently solved with a uniform 2-D grid structure.

EGC1318 – Abrupt Motion Tracking via Intensively Adaptive Markov-Chain Monte Carlo Sampling
The robust tracking of abrupt motion is a challenging task in computer vision due to its large motion uncertainty. While various particle filters and conventional Markov-chain Monte Carlo (MCMC) methods have been proposed for visual tracking, these methods often suffer from the well-known local-trap problem or from a poor convergence rate. In this paper, we propose a novel sampling-based tracking scheme for the abrupt motion problem in the Bayesian filtering framework. To effectively handle the local-trap problem, we first introduce the stochastic approximation Monte Carlo (SAMC) sampling method into the Bayesian filter tracking framework, in which the filtering distribution is adaptively estimated as the sampling proceeds, and thus a good approximation to the target distribution is achieved. In addition, we propose a new MCMC sampler with intensive adaptation to further improve the sampling efficiency, which combines a density-grid-based predictive model with the SAMC sampling to give a proposal adaptation scheme. The proposed method is effective and computationally efficient in addressing the abrupt motion problem. We compare our approach with several alternative tracking algorithms, and extensive experimental results are presented to demonstrate the effectiveness and the efficiency of the proposed method in dealing with various types of abrupt motions.

EGC1319 – Adaptive Perona–Malik Model Based on the Variable Exponent for Image Denoising
This paper introduces a class of adaptive Perona-Malik (PM) diffusion, which combines the PM equation with the heat equation. The PM equation provides a potential algorithm for image segmentation, noise removal, edge detection, and image enhancement. However, the defect of the traditional PM model is its tendency to cause the staircase effect and to create new features in the processed image. Utilizing the edge indicator as a variable exponent, we can adaptively control the diffusion mode, which alternates between PM diffusion and Gaussian smoothing in accordance with the image feature. Computer experiments indicate that the present algorithm is very efficient for edge detection and noise removal.
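A sketch of the blending idea in EGC1319: an edge indicator steers each pixel between Perona-Malik diffusion (edge-preserving) and heat-equation smoothing (staircase-free). The weighting function w below is an assumption for illustration; the paper defines a specific variable exponent instead:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_pm(u, n_iter=50, dt=0.15, k=10.0):
    """Explicit time stepping of a diffusion whose conductivity blends
    Perona-Malik (near edges) with the heat equation, c = 1 (flat areas)."""
    u = u.astype(np.float64)
    for _ in range(n_iter):
        gy, gx = np.gradient(gaussian_filter(u, 1.0))  # presmoothed gradient
        edge = np.hypot(gx, gy)
        g_pm = 1.0 / (1.0 + (edge / k) ** 2)           # PM conductivity
        w = edge / (edge + k)                          # 0 flat, -> 1 at edges
        c = w * g_pm + (1.0 - w)                       # blended conductivity
        uy, ux = np.gradient(u)
        div = np.gradient(c * uy, axis=0) + np.gradient(c * ux, axis=1)
        u = u + dt * div                               # divergence-form update
    return u
```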
EGC1320 – Algorithms for the Digital Restoration of Torn Films
This paper presents algorithms for the digital restoration of films damaged by tear. As well as causing local image data loss, a tear results in a noticeable relative shift in the frame between the regions at either side of the tear boundary. This paper describes a method for delineating the tear boundary and for correcting the displacement. This is achieved using a graph-cut segmentation framework that can be either automatic or interactive when automatic segmentation is not possible. Using temporal intensity differences to form the boundary conditions for the segmentation facilitates the robust division of the frame. The resulting segmentation map is used to calculate and correct the relative displacement using a global-motion estimation approach based on motion histograms. A high-quality restoration is obtained when a suitable missing-data treatment algorithm is used to recover any missing pixel intensities.

EGC1321 – An Algorithm for Power Line Detection and Warning Based on a Millimeter-Wave Radar Video
Power-line-strike accidents are a major safety threat for low-flying aircraft such as helicopters, so an automatic power-line warning system is highly desirable. In this paper, we propose an algorithm for detecting power lines in radar videos from an active millimeter-wave sensor. The Hough transform is employed to detect candidate lines. The major challenge is that the radar videos are very noisy due to ground return. The noise points can fall on the same line, which results in signal peaks after the Hough transform similar to those of the actual cable lines. To differentiate the cable lines from the noise lines, we train a support vector machine to perform the classification. We exploit the Bragg pattern, which is due to the diffraction of electromagnetic waves on the periodic surface of power lines, and propose a set of features to represent the Bragg pattern for the classifier. We also propose a slice-processing algorithm that supports parallel processing and improves the detection of cables in a cluttered background. Lastly, an adaptive algorithm is proposed to integrate the detection results from individual frames into a reliable video detection decision, in which the temporal correlation of the cable pattern across frames is used to make the detection more robust. Extensive experiments with real-world data validated the effectiveness of our cable detection algorithm.

EGC1322 – An Algorithm for the Contextual Adaption of SURF Octave Selection with Good Matching Performance: Best Octaves
Speeded-Up Robust Features (SURF) is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance.
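The candidate-line stage of EGC1321 above can be prototyped with a standard Hough transform, as in the scikit-image sketch below; the SVM over Bragg-pattern features, the slice processing, and the temporal integration that make the detector robust are not shown:

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line, hough_line_peaks

def candidate_lines(frame, max_lines=10):
    """Propose candidate lines in one grayscale radar frame: edge map,
    straight-line Hough transform, then peak picking. Each candidate is
    a (votes, angle, distance-from-origin) triple for later
    classification into cable vs. noise."""
    edges = canny(frame, sigma=2.0)
    hspace, angles, dists = hough_line(edges)
    peaks = hough_line_peaks(hspace, angles, dists, num_peaks=max_lines)
    return list(zip(*peaks))
```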
EGC1323 – An Alternating Minimization Algorithm for Binary Image Restoration
The problem we consider in this paper is binary image restoration. It is, in essence, difficult to solve because of the combinatorial nature of the problem. To overcome this difficulty, we propose a new minimization model that makes use of a new variable to enforce the image to be binary. Based on the proposed minimization model, we present a fast alternating minimization algorithm for binary image restoration and prove its convergence. Experimental results show that the proposed method is feasible and effective for binary image restoration.

EGC1324 – An Efficient Camera Calibration Technique Offering Robustness and Accuracy over a Wide Range of Lens Distortion
In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion-free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher-order radial and tangential distortion terms improves calibration accuracy.

EGC1325 – An Efficient Selective Perceptual-Based Super-Resolution Estimator
Ab initio protein structure prediction methods first generate large sets of structural conformations as candidates (called decoys) and then select the most representative decoys through clustering techniques. Classical clustering methods are inefficient due to the pairwise distance calculation and thus become infeasible when the number of decoys is large. In addition, the existing clustering approaches suffer from arbitrariness in determining a distance threshold for proteins within a cluster: a small distance threshold leads to many small clusters, while a large distance threshold results in the merging of several independent clusters into one cluster. In this paper, we propose an efficient clustering method through fast estimation of cluster centroids and efficient pruning of rotation spaces. The number of clusters is automatically detected by information distance criteria. A package named ONION, which can be downloaded freely, is implemented accordingly. Experimental results on benchmark data sets suggest that ONION is 14 times faster than existing tools, and ONION obtains better selections for 31 targets and worse selections for 19 targets compared with SPICKER's selections. On an average PC, ONION can cluster 100,000 decoys in around 12 minutes.

EGC1326 – An Energy-Based Model for the Image Edge-Histogram Specification Problem
In this correspondence, we present an original energy-based model that achieves the edge-histogram specification of a real input image and thus extends the exact specification method of the image luminance (or gray-level) distribution recently proposed by Coltuc et al. Our edge-histogram specification approach is stated as an optimization problem in which each edge of a real input image will tend iteratively toward some specified gradient magnitude values given by a target edge distribution (or a normalized edge histogram, possibly estimated from a target image). To this end, a hybrid optimization scheme combining a global and deterministic conjugate-gradient-based procedure and a local stochastic search using the Metropolis criterion is proposed herein to find a reliable solution to our energy-based model. Experimental results are presented, and several applications follow from this procedure.

EGC1327 – An Investigation of Dehazing Effects on Image and Video Coding
Split networks are commonly used to visualize collections of bipartitions, also called splits, of a finite set. Such collections arise, for example, in evolutionary studies. Split networks can be viewed as a generalization of phylogenetic trees and may be generated using the SplitsTree package. Recently, the NeighborNet method for generating split networks has become rather popular, in part because it is guaranteed to always generate a circular split system, which can always be displayed by a planar split network. Even so, labels must be placed on the "outside" of the network, which might be problematic in some applications. To help circumvent this problem, it can be helpful to consider so-called flat split systems, which can be displayed by planar split networks where labels are allowed on the inside of the network too. Here, we present a new algorithm that is guaranteed to compute a minimal planar split network displaying a flat split system in polynomial time, provided the split system is given in a certain format. We also briefly discuss two heuristics that could be useful for analyzing phylogeographic data and that allow the computation of flat split systems in this format in polynomial time.
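The splitting idea of EGC1323 above (introduce an auxiliary variable to carry the binary constraint, then alternate easy subproblems) can be sketched as follows. The Gaussian blur operator, step size, and coupling weight are assumptions for illustration; the paper's model and convergence proof are specific to its own formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_restore(f, lam=1.0, n_iter=100, step=0.2, sigma=1.5):
    """Alternate (1) a gradient step on ||K u - f||^2 + lam ||u - v||^2
    over the continuous image u, and (2) the exact minimizer over the
    binary variable v, which is simply thresholding u. K is assumed to
    be a Gaussian blur (self-adjoint, so K^T = K)."""
    u = f.astype(np.float64).copy()
    v = (u > 0.5).astype(np.float64)
    for _ in range(n_iter):
        resid = gaussian_filter(u, sigma) - f          # K u - f
        grad = gaussian_filter(resid, sigma) + lam * (u - v)
        u = np.clip(u - step * grad, 0.0, 1.0)
        v = (u > 0.5).astype(np.float64)               # binary subproblem
    return v
```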
EGC1328 – An Online Learning Approach to Occlusion Boundary Detection
We propose a novel online learning-based framework for occlusion boundary detection in video sequences. This approach does not require any prior training and instead "learns" occlusion boundaries by updating a set of weights for the online learning Hedge algorithm at each frame instance. Whereas previous training-based methods perform well only on data similar to the trained examples, the proposed method is well suited for any video sequence. We demonstrate the performance of the proposed detector both for the CMU data set, which includes hand-labeled occlusion boundaries, and for a novel video sequence. In addition to occlusion boundary detection, the proposed algorithm is capable of classifying occlusion boundaries by angle and by whether the occluding object is covering or uncovering the background.

EGC1329 – Analyzing Image Deblurring Through Three Paradigms
Recovering a sharp version of a blurred image is a long-standing inverse problem. In this paper, we analyze the research on this topic both theoretically and experimentally through three paradigms: 1) the deterministic filter; 2) Bayesian estimation; and 3) the conjunctive deblurring algorithm (CODA), which performs deterministic filtering and Bayesian estimation in a conjunctive manner. We point out the weaknesses of the deterministic filter and unify the limitation latent in two kinds of Bayesian estimators. We further explain why the CODA is able to handle quite large blurs beyond Bayesian estimation. Finally, we propose a novel method to overcome several unreported limitations of the CODA. Although extensive experiments demonstrate that our method outperforms state-of-the-art methods by a large margin, some common problems of image deblurring remain unsolved and should attract further research efforts.

EGC1330 – Anomaly Detection and Reconstruction from Random Projections
Compressed-sensing methodology typically employs random projections simultaneously with signal acquisition to accomplish dimensionality reduction within a sensor device. The effect of such random projections on the preservation of anomalous data is investigated. The popular RX anomaly detector is derived for the case in which global anomalies are to be identified directly in the random-projection domain, and it is determined via both random simulation and empirical observation that strongly anomalous vectors are likely to be identifiable by the projection-domain RX detector even in low-dimensional projections. Finally, a reconstruction procedure for hyperspectral imagery is developed wherein projection-domain anomaly detection is employed to partition the data set, permitting anomaly and normal pixel classes to be separately reconstructed in order to improve the representation of the anomaly pixels.
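The projection-domain RX detector of EGC1330 is directly implementable: project each pixel's spectrum with a random matrix, then score Mahalanobis distances in the reduced space. A minimal sketch (the reconstruction stage is omitted):

```python
import numpy as np

def rx_after_projection(cube, m=20, seed=0):
    """Global RX anomaly scores computed directly on random projections
    of hyperspectral pixels. cube: (rows, cols, bands) array; returns a
    (rows, cols) anomaly-score map."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(np.float64)
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((bands, m)) / np.sqrt(m)   # random projection
    y = x @ phi                                          # projection domain
    d = y - y.mean(axis=0)
    icov = np.linalg.pinv(np.cov(y, rowvar=False))
    scores = np.einsum('ij,jk,ik->i', d, icov, d)        # Mahalanobis distance
    return scores.reshape(rows, cols)
```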
EGC1331 – Assemble New Object Detector with Few Examples
Learning a satisfactory object detector generally requires sufficient training data to cover most variations of the object. In this paper, we show that the performance of an object detector is severely degraded when training examples are limited. We propose an approach to handle this issue by exploring a set of pretrained auxiliary detectors for other categories. By mining the global and local relationships between the target object category and the auxiliary objects, a robust detector can be learned with very few training examples. We adopt the deformable part model proposed by Felzenszwalb and simultaneously explore the root and part filters in the auxiliary object detectors under the guidance of the few training examples from the target object category. An iterative solution is introduced for such a process. Extensive experiments on the PASCAL VOC 2007 challenge data set show the encouraging performance of the new detector assembled from those related auxiliary detectors.

EGC1332 – Automatic Bootstrapping and Tracking of Object Contours
A new fully automatic object tracking and segmentation framework is proposed. The framework consists of a motion-based bootstrapping algorithm concurrent with a shape-based active contour. The shape-based active contour uses a finite shape memory that is automatically and continuously built from both the bootstrap process and the active-contour object tracker. A scheme is proposed to ensure that the finite shape memory is continuously updated but forgets unnecessary information. Two new ways of automatically extracting shape information from image data given a region of interest are also proposed. Results demonstrate that the bootstrapping stage provides important motion and shape information to the object tracker. This information is found to be essential for good (fully automatic) initialization of the active contour. Further results also demonstrate convergence properties of the content of the finite shape memory and similar object tracking performance in comparison with an object tracker with unlimited shape memory. Tests with an active contour using a fixed-shape prior also demonstrate superior performance for the proposed bootstrapped finite-shape-memory framework and similar performance when compared with a recently proposed active contour that uses an alternative online learning model.

EGC1333 – Automatic Image Equalization and Contrast Enhancement Using Gaussian Mixture Modeling
This paper addresses the automatic image segmentation problem in a region-merging style. With an initially oversegmented image, in which many regions (or superpixels) with homogeneous color are detected, image segmentation is performed by iteratively merging the regions according to a statistical test. There are two essential issues in a region-merging algorithm: the order of merging and the stopping criterion. In the proposed algorithm, these two issues are solved by a novel predicate, which is defined by the sequential probability ratio test and the minimal cost criterion. Starting from an oversegmented image, neighboring regions are progressively merged if there is evidence for merging according to this predicate. We show that the merging order follows the principle of dynamic programming. This formulates image segmentation as an inference problem, where the final segmentation is established based on the observed image. We also prove that the produced segmentation satisfies certain global properties. In addition, a faster algorithm is developed to accelerate the region-merging process, which maintains a nearest-neighbor graph in each iteration. Experiments on real natural images are conducted to demonstrate the performance of the proposed dynamic region-merging algorithm.
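The region-merging loop described in the abstract above can be illustrated with a deliberately simplified predicate: repeatedly merge the most similar adjacent regions until a distance threshold is reached. Plain mean-color distance stands in for the paper's SPRT-based predicate and dynamic-programming merge order, and no nearest-neighbor graph acceleration is attempted:

```python
import numpy as np

def merge_regions(labels, image, thresh=12.0):
    """Greedy merging of an oversegmentation. labels: (H, W) int map of
    superpixels; image: (H, W, 3) float color image."""
    labels = labels.copy()
    def stats():
        means = {l: image[labels == l].mean(axis=0) for l in np.unique(labels)}
        adj = set()
        for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
            if a != b: adj.add((min(a, b), max(a, b)))
        for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
            if a != b: adj.add((min(a, b), max(a, b)))
        return means, adj
    while True:
        means, adj = stats()
        if not adj:
            break
        a, b = min(adj, key=lambda p: np.linalg.norm(means[p[0]] - means[p[1]]))
        if np.linalg.norm(means[a] - means[b]) > thresh:
            break                      # stopping criterion reached
        labels[labels == b] = a        # merge b into a
    return labels
```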
EGC1334 – Bayesian Estimation for Optimized Structured Illumination Microscopy
Structured illumination microscopy is a recent imaging technique that aims at going beyond the classical optical resolution by reconstructing high-resolution (HR) images from low-resolution (LR) images acquired through modulation of the transfer function of the microscope. The classical implementation has a number of drawbacks, such as requiring a large number of images to be acquired and parameters to be manually set in an ad hoc manner, that have, until now, hampered its wide dissemination. Here, we present a new framework based on a Bayesian inverse problem formulation that enables the computation of one HR image from a reduced number of LR images and has no specific constraints on the modulation. Moreover, it permits the automatic estimation of the optimal reconstruction hyperparameters and the computation of an uncertainty bound on the estimated values. We demonstrate through numerical evaluations on simulated data and examples on real microscopy data that our approach represents a decisive advance for a wider use of HR microscopy through structured illumination.

EGC1335 – Bayesian Robust Principal Component Analysis
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation of the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels without having to change the model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
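As a point of reference for EGC1335, the optimization-based robust PCA family it is compared against can be sketched with simple alternating thresholding; the thresholds below are heuristic assumptions, not the paper's Bayesian inference procedure:

```python
import numpy as np

def rpca(M, lam=None, n_iter=50):
    """Low-rank + sparse decomposition by alternating singular-value
    thresholding (low-rank part) and soft thresholding (sparse part).
    Returns (L, S) with M approximately equal to L + S."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # standard sparsity weight
    tau = 0.1 * np.abs(M).max()             # heuristic threshold scale
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt          # shrink singular values
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * tau, 0.0)  # shrink entries
    return L, S
```

For the video use case in the abstract, each column of M would hold one vectorized frame, so L captures the static background and S the moving objects.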
EGC1336 – Bayesian Texture Classification Based on Contourlet Transform and BYY Harmony Learning of Poisson Mixtures
As a newly developed 2-D extension of the wavelet transform using multiscale and directional filter banks, the contourlet transform can effectively capture the intrinsic geometric structures and smooth contours of a texture image, which are the dominant features for texture classification. In this paper, we propose a novel Bayesian texture classifier based on adaptive model-selection learning of Poisson mixtures on the contourlet features of texture images. The adaptive model-selection learning of Poisson mixtures is carried out by the recently established adaptive gradient Bayesian Ying-Yang harmony learning algorithm for Poisson mixtures. The experiments demonstrate that our proposed Bayesian classifier significantly improves the texture classification accuracy in comparison with several current state-of-the-art texture classification approaches.

EGC1337 – Binarization of Low-Quality Barcode Images Captured by Mobile Phones Using a Local Window of Adaptive Location and Size
It is difficult to directly apply existing binarization approaches to barcode images captured by mobile devices due to their low quality. This paper proposes a novel scheme for the binarization of such images. The barcode and background regions are differentiated by the number of edge pixels in a search window. Unlike existing approaches that center the pixel to be binarized within a window of fixed size, we propose to shift the window center to the nearest edge pixel so that a balance between the number of object and background pixels can be achieved. The window size is adaptive either to the minimum distance to edges or to the minimum element width in the barcode. The threshold is calculated using the statistics in the window. Our proposed method has demonstrated its capability in handling the nonuniform illumination problem and the size variation of objects. Experimental results conducted on 350 images captured by five mobile phones achieve about 100% recognition rate in good lighting conditions, and about 95% and 83% in bad lighting conditions. Comparisons made with nine existing binarization methods demonstrate the advancement of our proposed scheme.

EGC1338 – Capacity Analysis for Orthogonal Halftone Orientation Modulation Channels
Halftone dot orientation modulation has recently been proposed as a method for data hiding in printed images. Extraction of data embedded with halftone orientation modulation is accomplished by computing, from the scanned hardcopy image, detection statistics that uniquely identify the embedded orientation. From a communications perspective, this data-hiding setup forms an interesting class of channels with dot orientation as input and a vector of statistics as the output. This paper derives capacity expressions for these channels that allow for numerical evaluation of the capacity. The results provide significant insight for orientation-modulation-based print-scan-resilient data hiding: the capacity varies significantly as a function of the image gray level, and experimentally observed error-free data rates closely mirror the variation in capacity.
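The window-shifting idea of EGC1337 above is easy to prototype: find each pixel's nearest edge and threshold against statistics gathered around that edge instead of around the pixel itself. A fixed window size is assumed in this sketch, whereas the paper adapts it to the edge distance or barcode element width:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter
from skimage.feature import canny

def binarize_barcode(gray, win=15):
    """Edge-centred local thresholding for a grayscale barcode image:
    the threshold for each pixel is the local mean computed in a window
    centred on that pixel's nearest edge pixel."""
    edges = canny(gray, sigma=1.0)
    # row/column index of the nearest edge pixel, for every pixel
    _, (iy, ix) = distance_transform_edt(~edges, return_indices=True)
    local_mean = uniform_filter(gray.astype(np.float64), size=win)
    thresh = local_mean[iy, ix]          # statistics of the shifted window
    return gray > thresh
```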
EGC1339 – Color Constancy for Multiple Light Sources
Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.

EGC1340 – Combining Head Pose and Eye Location Information for Gaze Estimation
Head pose and eye location for gaze estimation have been studied separately in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of non-frontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
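The local strategy of EGC1339 can be sketched by running a simple illuminant estimator per patch and applying a diagonal (von Kries) correction; grey-world is used below as an assumed stand-in for whichever base algorithm is extended, and the paper's combination of neighbouring estimates into more robust ones is omitted:

```python
import numpy as np

def local_gray_world(img, patch=64):
    """Patch-wise colour constancy: estimate an illuminant per patch
    with grey-world, then correct that patch with a diagonal model.
    img: float RGB array in [0, 1]."""
    out = img.astype(np.float64).copy()
    h, w, _ = img.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = out[y:y + patch, x:x + patch]
            illum = block.reshape(-1, 3).mean(axis=0)      # grey-world estimate
            illum /= (np.linalg.norm(illum) + 1e-12)
            gain = 1.0 / (np.sqrt(3.0) * illum + 1e-12)    # map illuminant to neutral
            block *= gain                                  # in-place diagonal correction
    return np.clip(out, 0.0, 1.0)
```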
  17. 17. Elysium Technologies Private Limited Approved by ISO 9001:2008 and AICTE for SKP Training Singapore | Madurai | Trichy | Coimbatore | Cochin | Kollam | Chennai, was the average manual segmentation from three clinical experts. The mean distance ± standard deviation of CAMES with respect to GT profiles for LI and MA interfaces were 0.081 ± 0.099 and 0.082 ± 0.197 mm, respectively. The IMT measurement error between CAMES and GT was 0.078 ± 0.112 mm. CAMES was benchmarked against a previously developed automated technique based on an integrated approach using feature-based extraction and classifier (CALEX). Although CAMES underestimated the IMT value, it had shown a strong improvement in segmentation errors against CALEX for LI and MA interfaces by 8% and 42%, respectively. The overall IMT measurement bias for CAMES improved by 36% against CALEX. Finally, this paper demonstrated that the figure-of-merit of CAMES was 95.8% compared with 87.4% for CALEX. The combination of multiresolution CA recognition and far-wall segmentation led to an automated, low-complexity, real-time, and accurate technique for carotid IMT measurement. Validation on a multiethnic/multi- institutional data set demonstrated the robustness of the technique, which can constitute a clinically valid IMT measurement for assistance in atherosclerosis disease management.EGC1342 Computational Cameras: Convergence of Optics and Processing A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.EGC1343 Concatenated Block Codes for Unequal Error Protection of Embedded Bit Streams A state-of-the-art progressive source encoder is combined with a concatenated block coding mechanism to produce a robust source transmission system for embedded bit streams. The proposed scheme efficiently trades off the available total bit budget between information bits and parity bits through efficient information block size adjustment, concatenated block coding, and random block interleavers. The objective is to create embedded codewords such that, for a particular information block, the necessary protection is obtained via multiple channel encodings, contrary to the conventional methods that use a single code rate per information block. This way, a more flexible protection scheme is obtained. The information block size and concatenated coding rates are judiciously chosen to maximize system performance, subject to a total bit budget. 
EGC1344 Controlling Ink-Jet Print Attributes Via Neugebauer Primary Area Coverages

Ink-jet print attributes such as color gamut, grain, and cost are consequences of the materials and printing technology used and of choices made during the color management, color separation, and halftoning operations. Traditionally, color separation determines what amounts of the available inks to use for each reproducible color, and halftoning deals with the spatial distribution of inks, which also determines the nature of their overprinting. However, using an ink space as the means of communication between color separation and halftoning gives access to only some of the printed patterns that a printing system is capable of and, therefore, to only a reduced range of print attributes. Here, a method, Halftone Area Neugebauer Separation (HANS), is proposed to gain access to all possible printable patterns by specifying relative area coverages of a printing system's Neugebauer primaries instead of ink amounts alone. The result is prints with more optimal attributes (e.g., using less ink and giving rise to a larger color gamut) than is possible with current methods.
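For contrast with the ink-space bottleneck, the sketch below shows the classical Demichel model, which is what conventional pipelines implicitly use to turn ink amounts into Neugebauer-primary area coverages; these coverages are exactly the quantities HANS lets the separation specify directly. The CMY setup is an illustrative assumption.

```python
# Hedged sketch: Demichel model mapping CMY ink coverages to the eight
# Neugebauer primary (NP) area coverages under random-dot overprinting.
from itertools import product

def demichel(c, m, y):
    """Return the fractional area covered by each NP."""
    areas = {}
    for dc, dm, dy in product((0, 1), repeat=3):    # 0 = no ink, 1 = ink
        name = ''.join(k for k, d in zip('CMY', (dc, dm, dy)) if d) or 'W'
        areas[name] = ((c if dc else 1 - c) *
                       (m if dm else 1 - m) *
                       (y if dy else 1 - y))
    return areas

coverages = demichel(0.5, 0.2, 0.0)
print(coverages)    # e.g. 'CM' (blue) gets 0.5 * 0.2 * 1.0 = 0.10
assert abs(sum(coverages.values()) - 1.0) < 1e-9    # coverages partition the page
```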
EGC1345 Critically Sampled Wavelets with Composite Dilations

Wavelets with composite dilations provide a general framework for the construction of waveforms defined not only at various scales and locations, as with traditional wavelets, but also at various orientations and with different scaling factors in each coordinate. As a result, they are useful for analyzing the geometric information that often dominates multidimensional data much more efficiently than traditional wavelets. The shearlet system, for example, is a particularly well-known realization of this framework, which provides optimally sparse representations of images with edges. In this paper, we further investigate the constructions derived from this approach to develop critically sampled wavelets with composite dilations for the purpose of image coding. Not only do we show that many nonredundant directional constructions recently introduced in the literature can be derived within this setting, but we also introduce new critically sampled discrete transforms that achieve much better nonlinear approximation rates than traditional discrete wavelet transforms and outperform the other critically sampled multiscale transforms recently proposed.

EGC1346 CT Reconstruction from Parallel and Fan-Beam Projections by a 2-D Discrete Radon Transform

The discrete Radon transform (DRT) was defined by Averbuch et al. as an analog of the continuous Radon transform for discrete data. Both the DRT and its inverse are computable in O(n² log n) operations for images of size n × n. In this paper, we demonstrate the applicability of the inverse DRT to the reconstruction of a 2-D object from its continuous projections. The DRT and its inverse are shown to model the continuum accurately as the number of samples increases. Numerical results for reconstruction from parallel projections are presented. We also show that the inverse DRT can be used for reconstruction from fan-beam projections with equispaced detectors.
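A small end-to-end illustration of the parallel-projection pipeline follows. Note the hedge: scikit-image's standard filtered back-projection stands in for the paper's O(n² log n) inverse DRT, which is not available as a library routine here; only the projection/reconstruction structure is being illustrated.

```python
# Hedged sketch: reconstruction from parallel-beam projections, with
# filtered back-projection as a stand-in for the inverse DRT.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)      # parallel projections
recon = iradon(sinogram, theta=theta)     # filtered back-projection

print('RMSE:', np.sqrt(np.mean((recon - image) ** 2)))
```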
EGC1347 Depth Video Enhancement Based on Weighted Mode Filtering

This paper presents a novel approach to depth video enhancement. Given a high-resolution color video and its corresponding low-quality depth video, we improve the quality of the depth video by increasing its resolution and suppressing noise. To this end, a weighted mode filtering method is proposed based on a joint histogram. When the histogram is generated, a weight based on color similarity between the reference and neighboring pixels on the color image is computed and then used for counting each bin on the joint histogram of the depth map. The final solution is determined by seeking the global mode of the histogram. We show that the proposed method provides the optimal solution with respect to L1-norm minimization. For temporally consistent estimates on depth video, we extend this method to temporally neighboring frames. Simple optical flow estimation and a patch similarity measure are used for obtaining the high-quality depth video in an efficient manner. Experimental results show that the proposed method has outstanding performance and is very efficient compared with existing methods. We also show that the temporally consistent enhancement of depth video addresses the flickering problem and improves the accuracy of depth video.
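A minimal per-pixel sketch of the single-frame filter: each neighbor votes for its depth bin with a color-similarity weight, and the output is the mode of the weighted histogram. The bin count, window radius, and Gaussian color weight are illustrative parameters, not the paper's settings.

```python
# Hedged sketch: weighted mode filtering of a depth map with an RGB guide.
import numpy as np

def weighted_mode_filter(depth, color, bins=64, radius=3, sigma_c=10.0):
    h, w = depth.shape
    d_idx = np.clip((depth / depth.max() * (bins - 1)).astype(int), 0, bins - 1)
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            hist = np.zeros(bins)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Color-similarity weight of this neighbor's vote.
                        wgt = np.exp(-np.sum((color[y, x] - color[ny, nx]) ** 2)
                                     / (2 * sigma_c ** 2))
                        hist[d_idx[ny, nx]] += wgt
            out[y, x] = hist.argmax() / (bins - 1) * depth.max()  # weighted mode
    return out

depth = np.random.rand(32, 32)
color = np.random.rand(32, 32, 3) * 255
print(weighted_mode_filter(depth, color).shape)   # (32, 32)
```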
EGC1348 Design of Interpolation Functions for Subpixel-Accuracy Stereo-Vision Systems

Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation-function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as closely as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions.
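For reference, the worked example below is the classic fixed three-point parabola fit that such data-driven methodologies aim to replace; it is the standard closed form, not the paper's learned interpolation functions.

```python
# Hedged sketch: parabolic subpixel refinement around the best integer disparity.
def parabolic_subpixel(c_left, c_best, c_right):
    """Offset in (-0.5, 0.5) of the cost minimum around the best disparity."""
    denom = c_left - 2.0 * c_best + c_right
    if denom == 0:
        return 0.0
    return 0.5 * (c_left - c_right) / denom

# Matching costs sampled at disparities d-1, d, d+1; the true minimum is off-grid.
print(parabolic_subpixel(4.0, 1.0, 2.0))   # 0.25 -> refined disparity d + 0.25
```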
EGC1349 Discretization of Parametrizable Signal Manifolds

Transformation-invariant analysis of signals often requires the computation of the distance from a test pattern to a transformation manifold. In particular, the estimation of the distances between a transformed query signal and several transformation manifolds representing different classes provides essential information for the classification of the signal. In many applications, the computation of the exact distance to the manifold is costly, whereas an efficient practical solution is the approximation of the manifold distance with the aid of a manifold grid. In this paper, we consider a setting with transformation manifolds of known parameterization. We first present an algorithm for the selection of samples from a single manifold that permits minimizing the average error in the manifold distance estimation. Then we propose a method for the joint discretization of multiple manifolds that represent different signal classes, where we optimize the transformation-invariant classification accuracy yielded by the discrete manifold representation. Experimental results show that sampling each manifold individually by minimizing the manifold distance estimation error outperforms baseline sampling solutions with respect to registration and classification accuracy. Performing an additional joint optimization on all samples improves the classification performance further. Moreover, given a fixed total number of samples to be selected from all manifolds, an asymmetric distribution of samples to different manifolds depending on their geometric structures may also increase the classification accuracy in comparison with the equal distribution of samples.

EGC1350 Discriminative Metric Preservation for Tracking Low-Resolution Targets

Tracking low-resolution (LR) targets is a practical yet quite challenging problem in real video analysis applications. The lack of discriminative details in the visual appearance of an LR target leads to matching ambiguity, which confronts most existing tracking methods. Although artificially enhancing the video resolution with super-resolution (SR) techniques before analysis might be an option, the high computational cost can hardly meet the requirements of the tracking scenario. This paper presents a novel solution for tracking LR targets without explicitly performing SR. The new approach is based on discriminative metric preservation, which preserves the data affinity structure of the high-resolution (HR) feature space for effective and efficient matching of LR images. We substantiate this new approach in a solid case study of differential tracking under metric preservation and derive a closed-form solution to motion estimation for LR video. In addition, we extend the basic linear metric preservation method to a more powerful nonlinear kernel metric preservation method. Such a solution to LR target tracking is discriminative, robust, and efficient. Extensive experiments validate the effectiveness of the proposed approach and demonstrate its improved performance in tracking LR targets.
EGC1351 Edge Strength Filter Based Color Filter Array Interpolation

For economic reasons, most digital cameras use color filter arrays instead of beam splitters to capture image data. As a result, only one of the required three color samples is available at each pixel location, and the other two need to be interpolated. This process is called color filter array (CFA) interpolation, or demosaicing. Many demosaicing algorithms have been introduced over the years to improve subjective and objective interpolation quality. We propose an orientation-free edge strength filter and apply it to the demosaicing problem. The edge strength filter output is utilized both to improve the initial green channel interpolation and to apply the constant-color-difference rule adaptively. This simple edge-directed method yields visually pleasing results with high CPSNR.
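A stripped-down sketch of edge-guided green interpolation on a Bayer-like mosaic follows. The directional-difference weights are a simplified stand-in for the paper's orientation-free edge strength filter, and the quincunx green layout is an assumption of the demo.

```python
# Hedged sketch: green-plane interpolation steered by directional edge strength.
import numpy as np

def interpolate_green(bayer, green_mask):
    """Fill green at non-green sites, weighting H/V averages by inverse
    directional differences (an edge-strength surrogate)."""
    g = np.where(green_mask, bayer, 0.0)
    out = bayer.astype(float).copy()
    h, w = bayer.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if green_mask[y, x]:
                continue
            eh = abs(g[y, x - 1] - g[y, x + 1]) + 1e-6   # horizontal edge strength
            ev = abs(g[y - 1, x] - g[y + 1, x]) + 1e-6   # vertical edge strength
            wh, wv = 1.0 / eh, 1.0 / ev                  # smoother direction wins
            out[y, x] = (wh * (g[y, x - 1] + g[y, x + 1]) / 2 +
                         wv * (g[y - 1, x] + g[y + 1, x]) / 2) / (wh + wv)
    return out

bayer = np.random.rand(16, 16)
green_mask = (np.indices((16, 16)).sum(axis=0) % 2 == 0)  # quincunx green sites
print(interpolate_green(bayer, green_mask).shape)          # (16, 16)
```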
EGC1352 Efficient Object Tracking by Incremental Self-Tuning Particle Filtering on the Affine Group

We propose an incremental self-tuning particle filtering (ISPF) framework for visual tracking on the affine group, which can find the optimal state in a chainlike way with a very small number of particles. Unlike traditional particle filtering, which relies only on random sampling for state optimization, ISPF incrementally draws particles and utilizes an online-learned pose estimator (PE) to iteratively tune them toward their neighboring best states according to feedback appearance-similarity scores. Sampling is terminated if the maximum similarity of all tuned particles satisfies a target-patch similarity distribution modeled online or if the permitted maximum number of particles is reached. With the help of the learned PE and the appearance-similarity feedback scores, particles in ISPF become "smart" and can automatically move in the correct directions; thus, sparse sampling is possible. The optimal state can be efficiently found in a step-by-step way in which some particles serve as bridge nodes to help others reach the optimal state. In addition to the single-target scenario, the "smart" particle idea is also extended to the multitarget tracking problem. Experimental results demonstrate that ISPF can achieve great robustness and very high accuracy with only a very small number of particles.

EGC1353 Efficient Registration of Nonrigid 3-D Bodies

We present a novel method for accurate registration of 3-D nonrigid bodies that uses the phase-shift properties of the dual-tree complex wavelet transform (DT-CWT). Since the phases of DT-CWT coefficients change approximately linearly with the amount of feature displacement in the spatial domain, motion can be estimated using the phase information from these coefficients. The motion estimation is performed iteratively: first by using coarser-level complex coefficients to determine large motion components and then by employing finer-level coefficients to refine the motion field. We use a parametric affine model to describe the motion, where the affine parameters are found locally by substituting into an optical flow model and solving the resulting overdetermined set of equations. From the estimated affine parameters, the motion field between the sensed and reference data sets can be generated, and the sensed data set can then be shifted and interpolated spatially to align with the reference data set.

EGC1354 Eye-Tracking Database for a Set of Standard Video Sequences

This correspondence describes a publicly available database of eye-tracking data, collected on a set of standard video sequences that are frequently used in video compression, processing, and transmission simulations. A unique feature of this database is that it contains eye-tracking data for both the first and second viewings of each sequence. We have made available the uncompressed video sequences and the raw eye-tracking data for each sequence, along with different visualizations of the data and a preliminary analysis based on two well-known visual attention models.

EGC1355 Fast Bi-Directional Prediction Selection in H.264/MPEG-4 AVC Temporal Scalable Video Coding

In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction-type inheritance to eliminate the superfluous computations for bi-directional (BI) prediction in the finer partitions (16 × 8/8 × 16/8 × 8) by referring to the best temporal prediction type of the 16 × 16 partition. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove unnecessary BI calculations. Moreover, for block partitions smaller than 8 × 8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8 × 8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. Compared with the JSVM 9.11 software, our method saves 48% to 67% of the encoding time for a large variety of test videos over a wide range of coding bit rates, with only a minor coding performance loss.
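A toy sketch of the early-termination logic, assuming per-mode rate-distortion costs are available as plain numbers; the inheritance test and the threshold alpha are illustrative stand-ins for the paper's adaptive thresholds.

```python
# Hedged sketch: skip the expensive BI check for a finer partition when the
# 16x16 decision and a cost threshold suggest it cannot win.
def choose_prediction(cost_fw, cost_bw, cost_bi_fn, best_type_16, best_cost_16,
                      alpha=1.05):
    """Pick 'FW', 'BW', or 'BI' for a sub-16x16 partition."""
    uni_type, uni_cost = min((('FW', cost_fw), ('BW', cost_bw)),
                             key=lambda t: t[1])
    # Inheritance + threshold: if 16x16 preferred a uni-directional mode and
    # this partition's uni cost is already near the 16x16 best, skip BI.
    if best_type_16 != 'BI' and uni_cost <= alpha * best_cost_16:
        return uni_type
    cost_bi = cost_bi_fn()        # expensive BI motion search, run only if needed
    return 'BI' if cost_bi < uni_cost else uni_type

print(choose_prediction(100.0, 120.0, lambda: 90.0, 'FW', 98.0))  # 'FW', BI skipped
```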
EGC1356 Feature-Specific Difference Imaging

Difference images quantify changes in the object scene over time. In this paper, we use the feature-specific imaging paradigm to present methods for estimating a sequence of difference images from a sequence of compressive measurements of the object scene. Our goal is twofold. The first is to design, where possible, the optimal sensing matrix for taking compressive measurements; in scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are nonadaptive. The second is to develop closed-form and iterative techniques for estimating the difference images. We specifically look at l2- and l1-based methods. We show that l2-based techniques can estimate the difference image directly from the measurements without first reconstructing the object scene. This direct estimation exploits the spatial and temporal correlations between the object scene at two consecutive time instants. We further develop a method to estimate a generalized difference image from multiple measurements and use it to estimate the sequence of difference images. For l1-based estimation, we consider modified forms of the total-variation method and basis pursuit denoising. We also look at a third method that directly exploits the sparsity of the difference image. We present results to show the efficacy of these techniques and discuss the advantages of each.
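A minimal sketch of the direct l2 route, assuming the same random sensing matrix at both time instants and a sparse change: subtracting the two measurement vectors gives y2 - y1 = A d, so a least-squares solve approximates the difference image without reconstructing either scene.

```python
# Hedged sketch: direct difference-image estimation from compressive data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                        # vectorized scene size, # measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed random sensing matrix

x1 = rng.standard_normal(n)
d_true = np.zeros(n)
d_true[rng.choice(n, 5, replace=False)] = 3.0  # sparse change between frames
x2 = x1 + d_true

y1, y2 = A @ x1, A @ x2
# Minimum-l2-norm solution of A d = y2 - y1 (no scene reconstruction needed).
d_hat, *_ = np.linalg.lstsq(A, y2 - y1, rcond=None)

print('correlation with true change:', np.corrcoef(d_hat, d_true)[0, 1])
```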
EGC1357 Markov Invariants for Phylogenetic Rate Matrices Derived from Embedded Submodels

We consider novel phylogenetic models with rate matrices that arise via the embedding of a progenitor model on a small number of character states into a target model on a larger number of character states. Adapting representation-theoretic results from recent investigations of Markov invariants for the general rate matrix model, we give a prescription for identifying and counting Markov invariants for such "symmetric embedded" models, and we provide enumerations of these for the first few cases with a small number of character states. The simplest example is a target model on three states constructed from a general two-state model: the "2 ↪ 3" embedding. We show that, for two taxa, there exist two invariants of quadratic degree that can be used to infer pairwise distances directly from observed sequences under this model. A simple simulation study verifies their theoretical expected values and suggests that, given the appropriateness of the model class, they have statistical properties superior to those of the standard (log)Det invariant (which is of cubic degree for this case).
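For orientation, the block below gives one common (unnormalized) form of the (log)Det distance that the quadratic invariants are benchmarked against; normalization conventions vary across the literature, so this is a reference form, not the paper's exact definition.

```latex
% Pairwise (log)Det distance from the empirical joint-state frequency
% matrix F of a pair of taxa (marginal normalizations omitted):
\[
  d_{\mathrm{logDet}}(F) \;=\; -\log \det F .
\]
% For the 2 -> 3 embedded model, F is 3x3, so det F is cubic in its entries,
% whereas the paper's two Markov invariants are only quadratic.
```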
EGC1358 Fourier-Domain Multichannel Autofocus for Synthetic Aperture Radar

Synthetic aperture radar (SAR) imaging suffers from image focus degradation in the presence of phase errors in the received signal due to unknown platform motion or signal propagation delays. We present a new autofocus algorithm, termed Fourier-domain multichannel autofocus (FMCA), that is derived under a linear algebraic framework, allowing the SAR image to be focused in a noniterative fashion. Motivated by the multichannel autofocus (MCA) approach, the proposed algorithm invokes the assumption of a low-return region, which generally is provided within the antenna sidelobes. Unlike MCA, FMCA works directly with the collected polar Fourier data and is capable of accommodating wide-angle monostatic SAR and bistatic SAR scenarios. Most previous SAR autofocus algorithms rely on the prior assumption that the radar's range of look angles is small, so that the phase errors can be modeled as varying along only one dimension of the collected Fourier data; in some cases, implicit assumptions are also made regarding the SAR scene. The performance of such autofocus algorithms degrades if these assumptions are not satisfied. The proposed algorithm has the advantage that it requires prior assumptions about neither the range of look angles nor the characteristics of the scene.

EGC1359 Framelet-Based Blind Motion Deblurring From a Single Image

How to recover a clear image from a single motion-blurred image has long been a challenging open problem in digital imaging. In this paper, we focus on recovering an image blurred by camera shake. A regularization-based approach is proposed that removes motion blurring by regularizing the sparsity of both the original image and the motion-blur kernel under tight wavelet frame systems. Furthermore, an adapted version of the split Bregman method is proposed to solve the resulting minimization problem efficiently. Experiments on both synthesized and real images show that our algorithm can effectively remove complex motion blurring from natural images without requiring any prior information about the motion-blur kernel.

EGC1360 Generalized Random Walks for Fusion of Multi-Exposure Images

A single captured image of a real-world scene is usually insufficient to reveal all the details due to under- or over-exposed regions. To solve this problem, images of the same scene can first be captured under different exposure settings and then combined into a single image using image fusion techniques. In this paper, we propose a novel probabilistic model-based fusion technique for multi-exposure images. Unlike previous multi-exposure fusion methods, our method aims to achieve an optimal balance between two quality measures, i.e., local contrast and color consistency, while combining the scene details revealed under different exposures. A generalized random walks framework is proposed to calculate a globally optimal solution subject to the two quality measures by formulating the fusion problem as probability estimation. Experiments demonstrate that our algorithm generates high-quality images at low computational cost. Comparisons with a number of other techniques show that our method generates better results in most cases.
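The sketch below makes the two quality measures concrete: local contrast as Laplacian magnitude and color consistency as closeness to a well-exposed value. Note the hedge: the generalized random-walk global optimization is replaced here by simple per-pixel normalized weighting, so only the measures themselves are illustrated.

```python
# Hedged sketch: exposure fusion driven by contrast and color-consistency weights.
import numpy as np
from scipy.ndimage import laplace

def fuse(exposures, sigma=0.2):
    """exposures: list of HxWx3 float images in [0, 1]."""
    weights = []
    for img in exposures:
        gray = img.mean(axis=2)
        contrast = np.abs(laplace(gray)) + 1e-6            # local contrast
        consistency = np.exp(-((img - 0.5) ** 2).sum(axis=2)
                             / (2 * sigma ** 2))           # well-exposedness
        weights.append(contrast * consistency)
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True)                      # normalize per pixel
    return (w[..., None] * np.stack(exposures)).sum(axis=0)

imgs = [np.clip(np.random.rand(32, 32, 3) * s, 0, 1) for s in (0.5, 1.0, 2.0)]
print(fuse(imgs).shape)   # (32, 32, 3)
```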
EGC1361 Gradient-Based Image Recovery Methods from Incomplete Fourier Measurements

A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem: one based on least-squares optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
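A compact sketch of the second stage, assuming the gradient fields have already been recovered by a CS solver: an FFT-based least-squares (Poisson) integration with periodic boundary handling stands in for the paper's solvers.

```python
# Hedged sketch: integrate recovered gradients back into an image.
import numpy as np

def poisson_integrate(gx, gy):
    """Least-squares integration of a (circular) backward-difference field."""
    # Adjoint of the backward-difference operator applied to the gradients.
    rhs = (gx - np.roll(gx, -1, axis=1)) + (gy - np.roll(gy, -1, axis=0))
    h, w = gx.shape
    fx = np.fft.fftfreq(w)[None, :]
    fy = np.fft.fftfreq(h)[:, None]
    denom = (2 - 2 * np.cos(2 * np.pi * fx)) + (2 - 2 * np.cos(2 * np.pi * fy))
    denom[0, 0] = 1.0                      # the mean (DC term) is unconstrained
    u = np.fft.ifft2(np.fft.fft2(rhs) / denom).real
    return u - u.mean()                    # fix the free constant

img = np.random.rand(64, 64)
gx = img - np.roll(img, 1, axis=1)         # circular backward differences
gy = img - np.roll(img, 1, axis=0)
rec = poisson_integrate(gx, gy)
print(np.abs(rec - (img - img.mean())).max())   # ~1e-13: exact up to a constant
```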
EGC1362 Hessian-Based Norm Regularization for Image Restoration with Biomedical Applications

We present nonquadratic Hessian-based regularization methods that can be used effectively for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, and rotation and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive, since it can be applied effectively to large images in reasonable computational time. We validate the overall regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

EGC1363 Highly Parallel Line-Based Image Coding for Many Cores

Computers are developing along a new trend, from dual-core and quad-core processors to ones with tens or even hundreds of cores. Multimedia, as one of the most important applications on computers, has an urgent need for parallel coding algorithms for compression. Taking intraframe/image coding as a starting point, this paper proposes a pure line-by-line coding scheme (LBLC) to meet this need. In LBLC, an input image is processed line by line sequentially, and each line is divided into small fixed-length segments. The compression of all segments, from prediction to entropy coding, is completely independent and concurrent across many cores. Results on a general-purpose computer show that our scheme achieves a 13.9 times speedup with 15 cores at the encoder and a 10.3 times speedup at the decoder. Ideally, such a near-linear speedup with the number of cores can be maintained for more than 100 cores. In addition to the high parallelism, the proposed scheme performs comparably to, or even better than, the H.264 high profile above middle bit rates. At near-lossless coding, it outperforms H.264 by more than 10 dB. At lossless coding, up to 14% bit-rate reduction is observed compared with H.264 lossless coding at the High 4:4:4 profile.
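A toy sketch of the segment-level parallelism, assuming simple left-neighbor (DPCM) prediction and zlib as a stand-in for the scheme's entropy coder; because each fixed-length segment compresses independently, segments map directly onto cores.

```python
# Hedged sketch: independent per-segment compression across worker processes.
import numpy as np, zlib
from concurrent.futures import ProcessPoolExecutor

SEG = 64   # fixed segment length, an illustrative choice

def encode_segment(seg):
    residual = np.diff(seg, prepend=seg[:1])       # left-neighbor prediction
    return zlib.compress(residual.astype(np.int16).tobytes())

def encode_image(img, workers=4):
    segments = [row[i:i + SEG]
                for row in img for i in range(0, img.shape[1], SEG)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Every segment is an independent bitstream -> embarrassingly parallel.
        return list(pool.map(encode_segment, segments, chunksize=64))

if __name__ == '__main__':
    img = (np.random.rand(256, 256) * 255).astype(np.int16)
    streams = encode_image(img)
    print(len(streams), sum(map(len, streams)))    # segment count, coded bytes
```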
EGC1364 High-Quality Reflection Separation Using Polarized Images

In this paper, we deal with the problem of separating the effect of reflection from images captured behind glass. The input consists of multiple polarized images captured from the same viewpoint but with different polarizer angles. The output is a high-quality separation of the reflection layer and the background layer from the images. We formulate this problem as a constrained optimization problem and propose a framework that allows us to fully exploit the mutually exclusive image information in our input data. We test our approach on various images and demonstrate that it can generate good reflection separation results.

EGC1365 Histogram Contextualization

Histograms have been widely used for feature representation in image and video content analysis. However, due to the orderless nature of the summarization process, histograms generally lack spatial information. This may degrade their discrimination capability in visual classification tasks. Although there have been several research attempts to encode spatial context into histograms, how to extend the encodings to higher-order spatial context is still an open problem. In this paper, we propose a general histogram contextualization method to encode higher-order spatial context efficiently. The method is based on the co-occurrence of local visual homogeneity patterns and hence is able to generate more discriminative histogram representations while remaining compact and robust. Moreover, we also investigate how to extend histogram contextualization to multiple modalities of context. It is shown that the proposed method can be naturally extended to combine both temporal and spatial context and thus facilitate video content analysis. In addition, a method to combine cross-feature context with spatial context via the technique of random forests is also introduced. Comprehensive experiments on face image classification and human activity recognition tasks demonstrate the superiority of the proposed histogram contextualization method over existing encoding methods.

EGC1366 Human Gait Recognition Using Patch Distribution Feature and Locality-Constrained Group Sparse Representation

In this paper, we propose a new patch distribution feature (PDF) (referred to as Gabor-PDF) for human gait recognition. We represent each gait energy image (GEI) as a set of local augmented Gabor features, which concatenate the Gabor features extracted at different scales and orientations together with the X-Y coordinates. We learn a global Gaussian mixture model (GMM) (referred to as the universal background model) with the local augmented Gabor features from all the gallery GEIs; each gallery or probe GEI is then further expressed as the normalized parameters of an image-specific GMM adapted from the global GMM. Observing that one video is naturally represented as a group of GEIs, we also propose a new classification method called locality-constrained group sparse representation (LGSR) to classify each probe video by minimizing the weighted l1,2 mixed-norm-regularized reconstruction error with respect to the gallery videos. In contrast to the standard group sparse representation method, which is a special case of LGSR, both the group sparsity and the local smooth sparsity constraints are enforced in LGSR. Our comprehensive experiments on the benchmark USF HumanID database demonstrate the effectiveness of the newly proposed feature Gabor-PDF and the new classification method LGSR for human gait recognition. Moreover, LGSR using the new feature