This document discusses a technique called the quadrature Radon transform for tomographic reconstruction. The quadrature Radon transform uses projections from two angles (θ and θ+π/2) rather than the single angle used in the conventional Radon transform. This provides additional information that can yield smoother reconstructions. Two approaches are proposed: 1) treating the two sets of projections as the real and imaginary parts of a complex number, or 2) averaging the individual back projections. Experimental results show the quadrature Radon transform produces numerically and visually better reconstructions than using a single set of projections.
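A toy sketch of the second proposed approach (averaging the two individual back projections), restricted to the trivial angle pair θ = 0 and θ = π/2, where the projections reduce to column and row sums; the function names and the mass-preserving normalization are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical illustration of "averaged back projections" for the angle
# pair theta = 0 and theta + pi/2, where projecting is just summing.

def project_0(img):
    # Projection at 0 degrees: sum along rows (one value per column).
    return [sum(img[i][j] for i in range(len(img))) for j in range(len(img[0]))]

def project_90(img):
    # Projection at 90 degrees: sum along columns (one value per row).
    return [sum(row) for row in img]

def back_project_pair(p0, p90):
    # Smear each projection back across the image (normalized so total
    # mass is preserved) and average the two back projections.
    n, m = len(p90), len(p0)
    bp0 = [[p0[j] / n for j in range(m)] for _ in range(n)]    # smear columns
    bp90 = [[p90[i] / m for _ in range(m)] for i in range(n)]  # smear rows
    return [[(bp0[i][j] + bp90[i][j]) / 2 for j in range(m)] for i in range(n)]

img = [[0, 1, 0],
       [1, 4, 1],
       [0, 1, 0]]
recon = back_project_pair(project_0(img), project_90(img))
total = sum(map(sum, recon))
```

Even this crude two-angle average already places the reconstruction's peak at the bright center pixel.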
This document presents and compares three approximation methods for thin plate spline mappings that reduce the computational complexity from O(p³) to O(m³), where m is the size of a small subset of the p points. Method 1 uses only the subset of points to estimate the mapping. Method 2 uses the subset of basis functions with all target values. Method 3 approximates the full matrix using the Nyström method. Experiments on synthetic grids show Method 3 has the lowest error, followed by Method 2, with Method 1 having the highest error. The three methods trade off accuracy, computation time, and the ability to do principal warp analysis.
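Method 3 can be sketched in a few lines; the snippet below applies the Nyström approximation to a toy linear-kernel matrix (an assumption for illustration, not the thin plate spline kernel of the paper).

```python
# Nystrom approximation of a p x p kernel matrix from m landmark points:
# only the small m x m block is inverted, O(m^3) instead of O(p^3).
import numpy as np

def nystrom(K, idx):
    # K ~= C @ pinv(W) @ C.T, where C holds m columns of K and W is the
    # m x m block of K restricted to the landmark points.
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0], [1.0, 1.0], [3.0, 1.0]])
K = X @ X.T                      # toy rank-2 kernel matrix (p = 5)
K_approx = nystrom(K, [0, 1])    # m = 2 landmarks spanning the rank
err = np.abs(K - K_approx).max()
```

The approximation is exact here because the landmarks capture the full rank of K; for full-rank kernels the landmark choice governs the error, matching the accuracy/cost trade-off described above.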
The document summarizes a study on fractal image compression of satellite images using range and domain techniques. It discusses fractal image compression methods, including partitioning images into range and domain blocks. Affine transformations are applied to domain blocks to match range blocks. Peak signal-to-noise ratio (PSNR) values are calculated for reconstructed rural and urban satellite images after 4 iterations, showing PSNR of around 17.0 for rural images and 22.0 for urban images. The proposed algorithm partitions the original image into non-overlapping range blocks and selects domain blocks twice the size of range blocks.
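The PSNR figures quoted above are computed from the mean squared error between the original and reconstructed images; a minimal sketch:

```python
# Peak signal-to-noise ratio for 8-bit grayscale images, the quality
# metric reported for the rural/urban reconstructions.
import math

def psnr(a, b, peak=255.0):
    diffs = [(x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    mse = sum(diffs) / len(diffs)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

ref = [[100, 100], [100, 100]]
rec = [[116, 116], [116, 116]]   # every pixel off by 16 -> MSE = 256
value = psnr(ref, rec)
```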
Double patterning lithography is a technique used to overcome the limitations of optical lithography. It works by splitting the mask pattern into two separate exposures to reduce feature density. The document describes the key techniques used in their software to solve the layout splitting problem, including a novel polygon cutting algorithm, dynamic priority search trees, and representing the problem as a tri-connected graph to decompose it into independent subproblems. Experimental results showed their method achieved a 3-10x speedup over other approaches.
The document discusses double patterning lithography (DPL) as a technique to overcome limitations of optical lithography at smaller feature sizes. It presents an algorithm for DPL layout splitting that involves: (1) decomposing the conflict graph into tri-connected components, (2) solving each component independently, and (3) merging the solutions. Experimental results showed the algorithm achieved a 3-10x speedup over previous approaches.
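The per-component splitting step amounts to 2-coloring the conflict graph; a minimal BFS sketch, assuming features and pairwise conflicts have already been extracted (all names illustrative):

```python
# Assign each layout feature to one of two masks so that no two
# conflicting features (closer than the minimum pitch) share a mask.
from collections import deque

def split_layout(conflicts, n):
    adj = [[] for _ in range(n)]
    for u, v in conflicts:
        adj[u].append(v)
        adj[v].append(u)
    mask = [None] * n
    for start in range(n):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if mask[v] is None:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None   # odd cycle: conflict cannot be resolved
    return mask

even = split_layout([(0, 1), (1, 2), (2, 3), (3, 0)], 4)  # 4-cycle: splittable
odd = split_layout([(0, 1), (1, 2), (2, 0)], 3)           # triangle: not
```

Decomposing into tri-connected components, as the paper does, lets each such coloring subproblem be solved independently before merging.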
This document proposes novel uses of a hypothetical higher-dimensional rasterizer in future graphics hardware. It discusses six potential applications: 1) Continuous collision detection by exploiting the volume swept by moving triangles, 2) Caustic rendering using defocused triangle extents for sampling, 3) Using the rasterizer for flexible higher-dimensional sampling. It also discusses 4) Rendering glossy reflections/refractions by rendering from multiple viewpoints simultaneously, 5) Motion blurred soft shadows through appropriate use of time and sampling, and 6) Stereoscopic and multi-view rendering with motion and defocus blur. The document presents a model for an efficient 5D stochastic rasterization pipeline to support these and other potential uses.
Image Restoration and Denoising By Using Nonlocally Centralized Sparse Repres... (IJERA Editor)
Observed images can be noisy, blurred, or otherwise distorted by degradation, and the sparse representations produced by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse-representation-based image restoration, this method introduces the notion of sparse coding noise, through which the sparse coefficients of the original image can be estimated. The resulting nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model. For denoising, a histogram clipping method using histogram-based sparse representation effectively reduces the noise, and a TMR filter is implemented for image quality. Various types of image restoration problems, including denoising, deblurring, and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
A Quest for Subexponential Time Parameterized Algorithms for Planar-k-Path: F... (cseiitgn)
The document summarizes a talk on obtaining subexponential time algorithms for NP-hard problems on planar graphs. It discusses using treewidth and tree decompositions to solve problems like 3-coloring in 2^O(√n) time on n-vertex planar graphs. It also discusses the exponential time hypothesis and how it implies lower bounds, showing these algorithms are optimal up to constant factors in the exponent. The document outlines several chapters, including using grid minors and bidimensionality to obtain 2^O(√k) algorithms for problems like k-path, even for some W[1]-hard problems parameterized by k.
In topological inference, the goal is to extract information about a shape, given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and consider different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc in the space that persist over a range of different scales.
To actually compute a persistence diagram in the geometric setting, previous work required complexes of size n^O(d). We reduce this complexity to O(n) (hiding some large constants depending on d) by using ideas from mesh generation.
This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.
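For 0-dimensional homology, the persistence diagram described above can be computed with a simple union-find sweep over pairwise distances; a sketch for points on a line (an illustrative simplification, not the mesh-based construction of the talk):

```python
# 0-dimensional persistence: every connected component is born at radius 0
# and dies at the scale where it merges into an older component.
# Kruskal-style sweep over pairwise distances with union-find.
from itertools import combinations

def h0_persistence(points):
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(
        (abs(points[i] - points[j]), i, j)
        for i, j in combinations(range(len(points)), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)    # a component dies at this scale
    return deaths               # one component survives at all scales

deaths = h0_persistence([0.0, 1.0, 10.0])
```

The gap between the finite death times (1 and 9 here) is exactly the kind of multi-scale signal a persistence diagram records; the full geometric setting needs the mesh-generation machinery the talk describes to stay near-linear in size.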
This document summarizes a paper that presents new algorithms for solving the cyclic order-preserving assignment problem (COPAP) and related sub-problem, the linear order-preserving assignment problem (LOPAP). It introduces a new point-assignment cost function called the Procrustean local shape distance (PLSD) and explores heuristics for using the A* search algorithm to more efficiently solve COPAP and LOPAP. Experimental results on the MPEG-7 shape dataset are presented and recommendations are made for solving COPAP/LOPAP in practice.
International Journal of Engineering Research and Development (IJERD Editor)
This document presents a technique for estimating parameters of a deployable mesh reflector antenna using 3D coordinate data and least squares fitting. It involves determining the unknown coefficients of the general quadratic surface equation that best fits the 3D points. The shape of the surface is then estimated as an elliptic paraboloid based on its invariants. Key parameters of the elliptic paraboloid like the focal length are then determined by reconstructing the surface in its standard form based on the estimated coefficients and orientations. Estimating these parameters at different stages of deployment testing can help validate the stability of the antenna surface and placement of its feed.
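A toy version of the final fitting step, assuming the surface has already been rotated into the standard form z = (x² + y²)/(4f) (the paper first fits a general quadric and orients it via its invariants):

```python
# One-parameter least squares for the focal length f of a circular
# paraboloid z = c*(x^2 + y^2) with c = 1/(4f):
# c = sum(z*u) / sum(u^2), where u = x^2 + y^2.
def fit_focal_length(points):
    num = sum(z * (x * x + y * y) for x, y, z in points)
    den = sum((x * x + y * y) ** 2 for x, y, z in points)
    c = num / den
    return 1.0 / (4.0 * c)

f_true = 2.0
pts = [(x * 0.5, y * 0.5, (x * x + y * y) * 0.25 / (4.0 * f_true))
       for x in range(-3, 4) for y in range(-3, 4) if (x, y) != (0, 0)]
f_est = fit_focal_length(pts)
```

With noisy measured coordinates, the same least-squares estimate at each deployment stage would track the feed-placement stability discussed above.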
This document presents a paper that proposes an image registration algorithm using log-polar transform and FFT-based correlation. The algorithm first estimates the angle, scale, and translation between two images by converting them to the log-polar domain, where rotation and scaling appear as translation. It then recovers the residual translation using gradient correlation in the spatial domain. The algorithm is tested on various images related by similarity transformations and is shown to accurately recover scales up to 5.85 times while being robust to noise. It provides a computationally efficient way to register images using properties of the Fourier transform and log-polar mappings.
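The FFT-based correlation half of the pipeline can be sketched with phase correlation, which recovers a pure cyclic translation; the log-polar step that maps rotation and scale to translation is omitted here:

```python
# Phase correlation: the normalized cross-power spectrum of an image and
# its translated copy is a pure phase ramp, whose inverse FFT is a delta
# at the displacement.
import numpy as np

def phase_correlation(f, g):
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = np.conj(F) * G
    R /= np.abs(R)                    # keep phase only
    r = np.fft.ifft2(R).real          # peak at the displacement
    return np.unravel_index(np.argmax(r), r.shape)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted = np.roll(img, (3, 5), axis=(0, 1))   # shift down 3, right 5
dy, dx = phase_correlation(img, shifted)
```

Running the same correlation on log-polar resampled magnitude spectra is what turns rotation and scale into recoverable shifts in the paper's algorithm.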
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
The chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can be used directly in its visualization; this analysis forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using simple color-mapping techniques. Later parts of the chapter explain how the same data can be visualized using tensor glyphs and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
1) The document proposes generalizing windowed Fourier analysis to signals on graphs by defining generalized convolution, translation, and modulation operators for graphs.
2) It introduces a windowed graph Fourier transform that enables vertex-frequency analysis, analogous to time-frequency analysis.
3) When applied to a signal with frequency components varying along a path graph, the resulting spectrogram matches intuition from classical discrete-time signal processing, demonstrating the approach can analyze signals on any graph.
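The windowed transform builds on the basic graph Fourier transform, i.e. expansion in the eigenvectors of the graph Laplacian; a minimal sketch on a path graph (forward and inverse transform only, no windowing):

```python
# Graph Fourier transform on a path graph: the Laplacian eigenvectors
# play the role of the classical Fourier basis, and the eigenvalues act
# as graph frequencies.
import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n - 1):                 # path graph 0-1-2-...-7
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian
eigvals, U = np.linalg.eigh(L)         # graph frequencies and GFT basis

signal = np.sin(np.linspace(0.0, np.pi, n))
spectrum = U.T @ signal                # forward GFT
recovered = U @ spectrum               # inverse GFT
```

The windowed graph Fourier transform of the paper localizes this expansion around each vertex via the generalized translation and modulation operators defined there.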
UMAP is a dimensionality-reduction technique that was proposed two years ago and quickly gained widespread use.
In this presentation I will try to demystify UMAP by comparing it to t-SNE. I also sketch its theoretical background in topology and fuzzy sets.
Image Restoration Using Nonlocally Centralized Sparse Representation and histo... (IJERA Editor)
Observed images can be noisy, blurred, or distorted by degradation, and restoring the image information with conventional models may not be accurate enough for a faithful reconstruction of the original image. I propose sparse representations to improve the performance of sparse-representation-based image restoration. In this method, sparse coding noise is introduced for image restoration, through which the sparse coefficients of the original image can be estimated. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model. For denoising, a histogram clipping method using histogram-based sparse representation effectively reduces the noise, and a TMR filter is implemented for image quality. Various types of image restoration problems, including denoising, deblurring, and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
An approach to decrease dimensions of logical... (ijcsa)
In this paper we consider manufacturing logic elements with the AND-NOT function based on bipolar transistors. Building on a recently considered approach for decreasing the dimensions of solid-state electronic devices while at the same time increasing their performance, we introduce an approach to decrease the dimensions of the transistors and p-n junctions that make up the logic element. In the framework of this approach, a heterostructure with the required configuration should first be manufactured. After manufacturing, the required areas of the heterostructure should be doped by diffusion or ion implantation, and the doping should be finished by optimized annealing of the dopant and/or radiation defects.
Finite-difference modeling, accuracy, and boundary conditions (Arthur Weglein)
This short report gives a brief review of the finite-difference modeling method used in MOSRP and its boundary conditions, as preparation for the Green's theorem RTM. The first part gives the finite-difference formulae used, and the second part describes the implemented boundary conditions. The last part uses two examples to point out some impacts of the accuracy of the source fields on the modeling results.
This document discusses computing canonical labelings of digraphs. It begins by reviewing key concepts like digraphs, adjacency matrices, and isomorphisms. It notes that while many algorithms exist for undirected graphs, computing canonical labelings of digraphs remains challenging. The document then presents several new theoretical concepts for digraph canonical labeling, including mix diffusion degree sequences. It proposes using these concepts to systematically compute canonical labelings and proves several theorems to guide the algorithm. It describes four algorithms for calculating the canonical labeling of a digraph and notes the algorithms have been preliminarily verified through software testing.
USING ADAPTIVE AUTOMATA IN GRAMMAR-BASED TEXT COMPRESSION TO IDENTIFY FREQUEN... (ijcsit)
Compression techniques reduce the data storage space required by applications dealing with large amounts of data by increasing the information entropy of its representation. This paper presents an adaptive rule-driven device, the adaptive automaton, as the mechanism for identifying recurring sequences of symbols to be compressed in a grammar-based lossless data compression scheme.
This document discusses the Fundamental Theorem of Calculus, which connects differentiation and integration. It states that differentiation and integration are inverse operations, analogous to how multiplication and division are inverses. The document provides examples of using the Fundamental Theorem to evaluate definite integrals and find average values. It also introduces the Second Fundamental Theorem of Calculus and the Net Change Theorem, which relate integrals and antiderivatives.
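A quick numeric illustration of the theorem: Simpson's rule, which is exact for quadratics, reproduces F(b) - F(a) for f(x) = 3x² and its antiderivative F(x) = x³:

```python
# The definite integral of f(x) = 3x^2 over [0, 1] equals
# F(1) - F(0) = 1 for the antiderivative F(x) = x^3.
def simpson(f, a, b, n=4):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3.0

f = lambda x: 3.0 * x * x
F = lambda x: x ** 3
integral = simpson(f, 0.0, 1.0)
net_change = F(1.0) - F(0.0)   # the Net Change Theorem's right-hand side
```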
This document describes a new machine learning algorithm called the Balancing Board Machine (BBM). BBM approximates the centroid of the polyhedral cone that represents the version space in kernel machines. It does this by computing the centroid of the intersection between the version space cone and a hyperplane, and then iteratively "balancing" the hyperplane towards the centroid. This process improves the generalization performance over support vector machines. The document provides mathematical details on exactly computing polytope centroids and deriving efficient approximation algorithms for implementing BBM.
The document reports on the results of three image processing projects. The first project implemented Lloyd-Max quantization to reduce image file sizes and Retinex theory to compensate for uneven illumination. The second project used principal component analysis to compute eigenfaces for face recognition. The third project performed linear discriminant analysis and tensor-based linear discriminant analysis for binary classification and visual object recognition. Illumination compensation subtracted an estimated illumination plane from image intensities to reduce shadows. Eigenfaces were the principal components of a training set of face images. Tensor-based linear discriminant analysis treated images as higher-order tensors to outperform conventional LDA.
Area efficient parallel LFSR for cyclic redundancy check (IJECEIAES)
Cyclic Redundancy Check (CRC), a code for error detection, finds many applications in digital communication, data storage, control systems, and data compression. CRC encoding is carried out using a Linear Feedback Shift Register (LFSR). Serial implementation of CRC requires a number of clock cycles equal to the message length plus the degree of the generator polynomial, whereas parallel implementation requires one clock cycle if the whole message is applied at a time. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique called state-space transformation. This paper presents a searching algorithm and a new technique to find the number of XOR gates required for different CRC algorithms, together with a detailed explanation of the search algorithm's implementation. The comparison between the proposed and previous architectures shows that the number of XOR gates is reduced for the CRC algorithms, which improves hardware efficiency. The searching algorithm and all matrix computations were performed using MATLAB simulations.
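A bit-serial software analogue of the LFSR encoder (one shift per message bit), written here for the reflected CRC-32 polynomial so it can be checked against Python's zlib; a sketch of the serial baseline, not the paper's parallel architecture:

```python
# Serial CRC-32: one LFSR clock per message bit, reflected polynomial
# 0xEDB88320, init and final XOR 0xFFFFFFFF (the zlib/ethernet convention).
import zlib

def crc32_serial(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                  # one LFSR shift per bit
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

check = crc32_serial(b"123456789")          # standard CRC check input
matches = check == zlib.crc32(b"123456789")
```

A parallel LFSR collapses these per-bit shifts into one combinational XOR network per input word, which is exactly the gate count the paper's search algorithm minimizes.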
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
This chapter lays out a discussion of discrete data representation and continuous data sampling and reconstruction. Fundamental differences between continuous (sampled) and discrete data are outlined. It introduces basis functions, discrete meshes, and cells as means of constructing piecewise continuous approximations from sampled data. One learns about the various types of datasets commonly used in visualization practice: their advantages, limitations, and constraints. The chapter gives an understanding of the trade-offs involved in the choice of a dataset for a given visualization application, while focusing on efficient implementation of the most commonly used datasets, presented with cell types in d ∈ [0, 3] dimensions.
This document provides solutions to problems marked with a star in the second edition of the textbook "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. The solutions cover topics from several chapters of the textbook, including image formation, image transforms, histogram processing, and spatial filtering. The problems address concepts such as image sampling, image compression, histogram equalization, and linear spatial filtering. Detailed explanations and illustrations are provided for each problem solution.
This document contains solutions to problems marked with a star in the second edition of the textbook "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. The solutions are provided for students and can also be downloaded from the book's website. The document includes introductory information, the solutions themselves which involve figures and mathematical expressions, and references back to chapters and equations from the textbook.
MIMO radar detection in compound Gaussian clutter using orthogonal discrete f... (ijma)
This paper proposes orthogonal Discrete Frequency Coding Space Time Waveforms (DFCSTW) for Multiple Input Multiple Output (MIMO) radar detection in compound-Gaussian clutter. The proposed orthogonal waveforms are designed considering the position and angle of the transmitting antenna when viewed from the origin. These orthogonally optimized waveforms show good resolution in spikier clutter with a Generalized Likelihood Ratio Test (GLRT) detector. The simulation results show that this waveform provides better detection performance in spikier clutter.
4 satellite image fusion using fast discrete... (Alok Padole)
This document proposes a new satellite image fusion method using Fast Discrete Curvelet Transforms (FDCT) that aims to generate high resolution multispectral images while retaining both rich spatial and spectral details. The method defines a fusion rule based on local magnitude ratio in the FDCT domain to inject high frequency details from a high resolution panchromatic image into lower resolution multispectral bands. Experimental results on Resourcesat-1 LISS IV and Cartosat-1 images show the proposed FDCT fusion method spatially outperforms wavelet, PCA, high pass filtering, IHS, and Gram-Schmidt fusion methods based on entropy and QAB/F metrics.
Parallel algorithm for computing EDT with new architecture (IAEME Publication)
This document summarizes a parallel algorithm for computing the Euclidean distance transform of a binary image. It presents a 3-step parallel algorithm that computes Euclidean distances using only integer arithmetic on pixel neighborhoods. It also describes a pipelined 2D array architecture suitable for hardware implementation of the algorithm, featuring identical processing elements connected in a regular structure. The architecture aims to efficiently compute Euclidean distance transforms of images with scalability to different image sizes.
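A brute-force reference for the quantity the parallel algorithm computes, the squared Euclidean distance transform of a binary image, using integer arithmetic only (no pipelining or neighborhood tricks):

```python
# Squared Euclidean distance transform: each output pixel holds the
# squared distance to the nearest foreground (1) pixel, computed here
# by exhaustive search as a correctness reference.
def squared_edt(img):
    feats = [(i, j) for i, row in enumerate(img)
             for j, v in enumerate(row) if v]
    return [[min((i - fi) ** 2 + (j - fj) ** 2 for fi, fj in feats)
             for j in range(len(img[0]))] for i in range(len(img))]

binary = [[0, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 1]]
dist2 = squared_edt(binary)
```

The O(n⁴) cost of this reference for an n x n image is what motivates the 3-step neighborhood-based parallel algorithm and the pipelined array of identical processing elements described above.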
This document summarizes a paper that presents new algorithms for solving the cyclic order-preserving assignment problem (COPAP) and related sub-problem, the linear order-preserving assignment problem (LOPAP). It introduces a new point-assignment cost function called the Procrustean local shape distance (PLSD) and explores heuristics for using the A* search algorithm to more efficiently solve COPAP and LOPAP. Experimental results on the MPEG-7 shape dataset are presented and recommendations are made for solving COPAP/LOPAP in practice.
International Journal of Engineering Research and DevelopmentIJERD Editor
This document presents a technique for estimating parameters of a deployable mesh reflector antenna using 3D coordinate data and least squares fitting. It involves determining the unknown coefficients of the general quadratic surface equation that best fits the 3D points. The shape of the surface is then estimated as an elliptic paraboloid based on its invariants. Key parameters of the elliptic paraboloid like the focal length are then determined by reconstructing the surface in its standard form based on the estimated coefficients and orientations. Estimating these parameters at different stages of deployment testing can help validate the stability of the antenna surface and placement of its feed.
This document presents a paper that proposes an image registration algorithm using log-polar transform and FFT-based correlation. The algorithm first estimates the angle, scale, and translation between two images by converting them to the log-polar domain, where rotation and scaling appear as translation. It then recovers the residual translation using gradient correlation in the spatial domain. The algorithm is tested on various images related by similarity transformations and is shown to accurately recover scales up to 5.85 times while being robust to noise. It provides a computationally efficient way to register images using properties of the Fourier transform and log-polar mappings.
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
Chapter provides an overview of a number of methods for visualizing tensor data. It explains principal component analysis as a technique used to process a tensor matrix and extract from it information that can directly be used in its visualization. It forms a fundamental part of many tensor data processing and visualization algorithms. Section 7.4 shows how the results of the principal component analysis can be visualized using the simple color-mapping techniques. Next parts of the chapter explain how same data can be visualized using tensor glyphs, and streamline-like visualization techniques.
In contrast to Slicer, which is a more general framework for analyzing and visualizing 3D slice-based data volumes, the Diffusion Toolkit focuses on DT-MRI datasets, and thus offers more extensive and easier to use options for fiber tracking.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
1) The document proposes generalizing windowed Fourier analysis to signals on graphs by defining generalized convolution, translation, and modulation operators for graphs.
2) It introduces a windowed graph Fourier transform that enables vertex-frequency analysis, analogous to time-frequency analysis.
3) When applied to a signal with frequency components varying along a path graph, the resulting spectrogram matches intuition from classical discrete-time signal processing, demonstrating the approach can analyze signals on any graph.
UMAP is a technique for dimensionality reduction that was proposed 2 years ago that quickly gained widespread usage for dimensionality reduction.
In this presentation I will try to demistyfy UMAP by comparing it to tSNE. I also sketch its theoretical background in topology and fuzzy sets.
Image Restoration UsingNonlocally Centralized Sparse Representation and histo...IJERA Editor
Due to the degradation of observed image the noisy, blurred, distorted image can be occurred .To restore the image informationby conventional modelsmay not be accurate enough for faithful reconstruction of the original image. I propose the sparse representations to improve the performance of based image restoration. In this method the sparse coding noise is added for image restoration, due to this image restoration the sparse coefficients of original image can be detected. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, fordenoising the image here we use the histogram clipping method by using histogram based sparse representation to effectively reduce the noise and also implement the TMR filter for Quality image. Various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.
An approach to decrease dimensions of logical... - ijcsa
In this paper we consider manufacturing logical elements with the AND-NOT function based on bipolar transistors. Building on a recently considered approach for decreasing the dimensions of solid-state electronic devices while at the same time increasing their performance, we introduce an approach to decrease the dimensions of the transistors and p-n junctions that form the logical element. Within this framework, a heterostructure with the required configuration should be manufactured. After manufacture, the required areas of the heterostructure should be doped by diffusion or ion implantation, and the doping should be finished by optimized annealing of dopants and/or radiation defects.
Finite-difference modeling, accuracy, and boundary conditions - Arthur Weglein
This short report gives a brief review of the finite-difference modeling method used in M-OSRP and its boundary conditions, as a preparation for the Green's theorem RTM. The first part gives the finite-difference formulae we used and the second part describes the implemented boundary conditions. The last part, using two examples, points out some impacts of the accuracy of the source fields on the modeling results.
This document discusses computing canonical labelings of digraphs. It begins by reviewing key concepts like digraphs, adjacency matrices, and isomorphisms. It notes that while many algorithms exist for undirected graphs, computing canonical labelings of digraphs remains challenging. The document then presents several new theoretical concepts for digraph canonical labeling, including mix diffusion degree sequences. It proposes using these concepts to systematically compute canonical labelings and proves several theorems to guide the algorithm. It describes four algorithms for calculating the canonical labeling of a digraph and notes the algorithms have been preliminarily verified through software testing.
USING ADAPTIVE AUTOMATA IN GRAMMAR-BASED TEXT COMPRESSION TO IDENTIFY FREQUEN... - ijcsit
Compression techniques reduce the data storage space required by applications dealing with large amounts of data by increasing the information entropy of its representation. This paper presents an adaptive rule-driven device, the adaptive automaton, as the device to identify recurring sequences of symbols to be compressed in a grammar-based lossless data compression scheme.
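The paper's identification device is an adaptive automaton; as a rough illustration of the underlying idea of grammar-based compression (replacing a recurring sequence of symbols by a grammar rule), here is one Re-Pair-style step in Python. Re-Pair is a different, simpler grammar-based scheme, and the function name and symbol conventions are invented for this sketch:

```python
from collections import Counter

def repair_step(seq, next_symbol):
    """One Re-Pair-style step: replace the most frequent adjacent pair
    with a fresh nonterminal and return (new_seq, rule)."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return seq, None        # nothing recurs; the grammar is finished
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(next_symbol)
            i += 2              # non-overlapping replacement, left to right
        else:
            out.append(seq[i])
            i += 1
    return out, (next_symbol, (a, b))

seq, rule = repair_step(list("abababc"), "R1")
print(seq, rule)   # ['R1', 'R1', 'R1', 'c'] ('R1', ('a', 'b'))
```

Iterating this step until no pair recurs yields a grammar whose rules plus the final sequence encode the input.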
This document discusses the Fundamental Theorem of Calculus, which connects differentiation and integration. It states that differentiation and integration are inverse operations, analogous to how multiplication and division are inverse. The document provides examples of using the Fundamental Theorem to evaluate definite integrals and find average values. It also introduces the Second Fundamental Theorem of Calculus and the Net Change Theorem, which relate integrals and antiderivatives.
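The inverse relationship the theorem describes can be checked numerically: a Riemann sum of f over [a, b] should agree with F(b) - F(a) for any antiderivative F. A minimal sketch (function names are illustrative):

```python
# Numerical check of the Fundamental Theorem of Calculus:
# the integral of 3x^2 over [0, 1] should equal F(1) - F(0) with F(x) = x^3.

def riemann_sum(f, a, b, n=100000):
    """Midpoint Riemann sum approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 3 * x ** 2   # integrand
F = lambda x: x ** 3       # an antiderivative of f

numeric = riemann_sum(f, 0.0, 1.0)
exact = F(1.0) - F(0.0)    # = 1 by the Fundamental Theorem
print(abs(numeric - exact) < 1e-6)  # True: the midpoint rule converges quickly here
```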
This document describes a new machine learning algorithm called the Balancing Board Machine (BBM). BBM approximates the centroid of the polyhedral cone that represents the version space in kernel machines. It does this by computing the centroid of the intersection between the version space cone and a hyperplane, and then iteratively "balancing" the hyperplane towards the centroid. This process improves the generalization performance over support vector machines. The document provides mathematical details on exactly computing polytope centroids and deriving efficient approximation algorithms for implementing BBM.
The document reports on the results of three image processing projects. The first project implemented Lloyd-Max quantization to reduce image file sizes and Retinex theory to compensate for uneven illumination. The second project used principal component analysis to compute eigenfaces for face recognition. The third project performed linear discriminant analysis and tensor-based linear discriminant analysis for binary classification and visual object recognition. Illumination compensation subtracted an estimated illumination plane from image intensities to reduce shadows. Eigenfaces were the principal components of a training set of face images. Tensor-based linear discriminant analysis treated images as higher-order tensors to outperform conventional LDA.
Area-efficient parallel LFSR for cyclic redundancy check - IJECEIAES
Cyclic Redundancy Check (CRC) coding for error detection finds many applications in digital communication, data storage, control systems and data compression. CRC encoding is carried out using a Linear Feedback Shift Register (LFSR). A serial implementation of CRC requires a number of clock cycles equal to the message length plus the degree of the generator polynomial, whereas a parallel implementation requires only one clock cycle if the whole message is applied at once. In previous work on parallel LFSRs, the hardware complexity of the architecture was reduced using a technique called state-space transformation. This paper presents a searching algorithm, with a detailed explanation of its implementation, and a new technique to find the number of XOR gates required for different CRC algorithms. A comparison between the proposed and previous architectures shows that the number of XOR gates is reduced, which improves hardware efficiency. The searching algorithm and all matrix computations were performed using MATLAB simulations.
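As a reference point for what the LFSR computes, here is a bit-serial CRC sketch in Python: polynomial long division over GF(2), where processing one message bit corresponds roughly to one clock cycle of the serial LFSR (the function and bit-list conventions are illustrative, not the paper's architecture):

```python
def crc_bits(message_bits, poly_bits):
    """Compute a CRC as the remainder of polynomial long division over GF(2).
    poly_bits includes the leading 1, e.g. [1, 0, 1, 1] for x^3 + x + 1."""
    deg = len(poly_bits) - 1
    bits = message_bits + [0] * deg        # message polynomial times x^deg
    for i in range(len(message_bits)):     # one message bit per step
        if bits[i]:
            for j, p in enumerate(poly_bits):
                bits[i + j] ^= p           # subtract (XOR) the generator
    return bits[-deg:]                     # remainder = CRC check bits

print(crc_bits([1, 1, 0, 1], [1, 0, 1, 1]))  # → [0, 0, 1]
```

A parallel architecture produces the same remainder but consumes a whole message word per cycle, which is where the XOR-gate count the paper optimizes comes from.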
Chapter summary and solutions to end-of-chapter exercises for "Data Visualization: Principles and Practice" book by Alexandru C. Telea
This chapter lays out a discussion of discrete data representation, continuous data sampling and reconstruction. Fundamental differences between continuous (sampled) and discrete data are outlined. It introduces basis functions, discrete meshes and cells as means of constructing piecewise continuous approximations from sampled data. One learns about the various types of datasets commonly used in visualization practice: their advantages, limitations and constraints. The chapter gives an understanding of the trade-offs involved in choosing a dataset for a given visualization application, while focusing on the efficiency of implementing the most commonly used datasets, presented with cell types in d ∈ [0, 3] dimensions.
This document provides solutions to problems marked with a star in the second edition of the textbook "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. The solutions cover topics from several chapters of the textbook, including image formation, image transforms, histogram processing, and spatial filtering. The problems address concepts such as image sampling, image compression, histogram equalization, and linear spatial filtering. Detailed explanations and illustrations are provided for each problem solution.
This document contains solutions to problems marked with a star in the second edition of the textbook "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods. The solutions are provided for students and can also be downloaded from the book's website. The document includes introductory information, the solutions themselves which involve figures and mathematical expressions, and references back to chapters and equations from the textbook.
MIMO radar detection in compound Gaussian clutter using orthogonal discrete f... - ijma
This paper proposes orthogonal Discrete Frequency Coding Space Time Waveforms (DFCSTW) for Multiple Input and Multiple Output (MIMO) radar detection in compound Gaussian clutter. The proposed orthogonal waveforms are designed considering the position and angle of the transmitting antenna when viewed from the origin. These orthogonally optimized waveforms show good resolution in spikier clutter with a Generalized Likelihood Ratio Test (GLRT) detector. The simulation results show that this waveform provides better detection performance in spikier clutter.
4 satellite image fusion using fast discrete... - Alok Padole
This document proposes a new satellite image fusion method using Fast Discrete Curvelet Transforms (FDCT) that aims to generate high resolution multispectral images while retaining both rich spatial and spectral details. The method defines a fusion rule based on local magnitude ratio in the FDCT domain to inject high frequency details from a high resolution panchromatic image into lower resolution multispectral bands. Experimental results on Resourcesat-1 LISS IV and Cartosat-1 images show the proposed FDCT fusion method spatially outperforms wavelet, PCA, high pass filtering, IHS, and Gram-Schmidt fusion methods based on entropy and QAB/F metrics.
Parallel algorithm for computing EDT with new architecture - IAEME Publication
This document summarizes a parallel algorithm for computing the Euclidean distance transform of a binary image. It presents a 3-step parallel algorithm that computes Euclidean distances using only integer arithmetic on pixel neighborhoods. It also describes a pipelined 2D array architecture suitable for hardware implementation of the algorithm, featuring identical processing elements connected in a regular structure. The architecture aims to efficiently compute Euclidean distance transforms of images with scalability to different image sizes.
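For contrast with the paper's 3-step parallel algorithm, the exact Euclidean distance transform is easy to state as a brute-force reference (distance from every pixel to the nearest foreground pixel; purely illustrative, not the proposed architecture):

```python
import math

def edt_brute_force(image):
    """Exact Euclidean distance transform of a binary image: for each pixel,
    the distance to the nearest foreground (1) pixel. Brute force over all
    pixel/foreground pairs; a parallel algorithm computes the same map
    far more efficiently using only local integer arithmetic."""
    h, w = len(image), len(image[0])
    ones = [(r, c) for r in range(h) for c in range(w) if image[r][c]]
    return [[min(math.hypot(r - a, c - b) for a, b in ones)
             for c in range(w)] for r in range(h)]

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
dt = edt_brute_force(img)
# dt[1][1] is 0 (a foreground pixel); dt[0][0] is sqrt(2) (diagonal neighbor)
```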
The document outlines the contents of a Mathematics-I course including ordinary differential equations, linear differential equations, mean value theorems, functions of several variables, curvature, applications of integration, multiple integrals, series and sequences, vector differentiation and integration. It provides textbooks and references for the course and outlines 12 lectures covering applications of integration such as length, volume, surface area and multiple integrals including change of order and change of variables in double and triple integrals.
Outage performance of underlay cognitive radio networks over mix fading envir... - IJECEIAES
In this paper, an underlay cognitive radio network over a mixed fading environment is presented and investigated. A cooperative cognitive system with a secondary source node S, a secondary destination node D, a secondary relay node, and a primary node P is considered. In this system model, we consider the mixed fading environment in two scenarios: Rayleigh/Nakagami-m and Nakagami-m/Rayleigh fading channels. For the performance analysis, the closed-form expression of the system outage probability (OP) and the integral-form expression of the ergodic capacity (EC) are derived in terms of the system's primary parameters. Finally, Monte Carlo simulations are used to verify the correctness of the analysis.
This document presents results from a lattice QCD calculation of the proton isovector scalar charge (gs) at two light quark masses. The calculation uses domain-wall fermions and the Iwasaki gauge action on a 32³ × 64 lattice with a spacing of 0.144 fm. Ratios of three-point to two-point correlation functions are formed and fit to a plateau to extract gs. Values of gs are obtained for quark masses of 0.0042 and 0.001, and all-mode averaging is used for the lighter mass. Chiral perturbation theory will be used to extrapolate gs to the physical quark mass. Preliminary results for gs at the unphysical quark masses are reported in lattice units.
This document summarizes finite difference modeling methods used at M-OSRP. It discusses:
1) The second order time and fourth order space finite difference schemes used to model acoustic wave propagation.
2) How boundary conditions like Dirichlet/Neumann generate strong spurious reflections that can mask true events.
3) The importance of accurate source fields for modeling - better source fields lead to more accurate linear inversions and the ability to observe phenomena like polarity reversals in modeled data.
This paper presents a geometric approach to the coordinatization of a measured space called the Map
Maker’s algorithm. The measured space is defined by a distance matrix for sites which are reordered and
mapped to points in a two-dimensional Euclidean space. The algorithm is tested on distance matrices
created from 2D random point sets and the resulting coordinatizations compared with the original point
sets for confirmation. Tolerance levels are set to deal with the cumulative numerical errors in the
processing of the algorithm. The final point sets are found to be the same apart from translations,
reflections and rotations as expected. The algorithm also serves as a method for projecting higher
dimensional data to 2D.
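The Map Maker's algorithm itself is not reproduced here, but the same task (recovering 2D coordinates from a distance matrix, up to the translations, reflections and rotations noted above) can be sketched with classical multidimensional scaling, a standard alternative method shown only to illustrate the problem setup:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates from a distance matrix via classical MDS
    (double-centering plus eigendecomposition). The result matches the
    original points only up to translation, rotation and reflection."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # Gram matrix of centered points
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]        # keep the largest `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
# The recovered points reproduce the input distance matrix exactly:
print(np.allclose(np.linalg.norm(X[:, None] - X[None, :], axis=-1), D))
```

Comparing recovered and original point sets modulo a rigid transformation is exactly the confirmation step the abstract describes.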
Fourier-transform analysis of a unilateral fin line and its derivatives - Yong Heui Cho
This document presents a Fourier-transform analysis of a unilateral fin line and its derivatives. The key points are:
1) A unilateral fin line is transformed into an equivalent problem of multiple suspended-substrate microstrip lines using the image theorem and the Fourier transform.
2) New rigorous dispersion relations are derived in the form of rapidly convergent series solutions, providing an accurate yet efficient method for numerical computation.
3) The dispersion relations for derivatives of the unilateral fin line, including suspended-substrate microstrip lines, microstrip lines, slot lines and coplanar waveguides, are also presented.
Propagation Model for Tree Blockage in Mobile Communication Systems using Uni... - IJEEE
A deterministic model for spherical wave propagation in a microcellular urban environment is presented. The method is based on ray tracing and the uniform theory of diffraction. The various ray contributions include diffraction from the corners of obstacles, reflection from the ground, and the direct wave. Depending on the height of the transmitting station, different ray paths will be present in the received signal. Calculations are carried out for hard and soft polarization. The results can be used for the planning of new cellular systems.
This document proposes a low complexity algorithm for jointly estimating the reflection coefficient, spatial location, and Doppler shift of a target for MIMO radar systems. It splits the estimation problem into two parts. The first part estimates the reflection coefficient in closed form. The second part jointly estimates the spatial location and Doppler shift using a 2D FFT approach. This allows significantly lower computational complexity compared to maximum likelihood estimation. Simulation results show the proposed estimator achieves the Cramér-Rao lower bound, providing optimal performance with low complexity.
ambiguity in geophysical interpretation_dileep p allavarapu - knigh7
National Geophysical Research Institute - a gravity student's ideas about ambiguity in gravity, magnetic, or any geophysical interpretation - dileep p allavarapu
https://www.facebook.com/photo.php?fbid=596783223719441&set=a.142976269100141.29680.100001633067896&type=1&theater
Advanced Approach for Slopes Measurement by Non-Contact Optical Technique - IJERA Editor
This document describes an advanced non-contact optical technique for measuring slopes. It introduces a numerical computation method to acquire surface shapes using optical moiré reflection. The method uses coherent illumination and fine-pitch gratings to project a reference grating onto a surface and observe interference fringes. The sensitivity and accuracy of this slope measurement method are high. Equations are derived that relate the measured slopes to the optical and geometric parameters of the system.
1. The document discusses multiple integral calculus topics including double integrals over rectangles and general regions, double integrals in polar coordinates, triple integrals in rectangular, cylindrical and spherical coordinates, and change of variables in multiple integrals.
2. It provides definitions, properties, examples and solutions for calculating double and triple integrals using rectangular, polar, cylindrical and spherical coordinate systems.
3. Formulas are given for calculating double and triple integrals in polar, cylindrical and spherical coordinates by representing the volume element and limits of integration in terms of the respective coordinate systems.
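The change of variables to polar coordinates introduces the Jacobian factor r, so the area element becomes dA = r dr dθ; a quick numerical check of the double-integral formula recovers the area of the unit disk (names and step counts are illustrative):

```python
import math

def disk_area_polar(R, n=200):
    """Approximate the area of a disk of radius R as a double integral in
    polar coordinates: theta over [0, 2*pi], r over [0, R], integrand r.
    The factor r is the Jacobian of the polar change of variables."""
    dr = R / n
    dth = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr            # midpoint in r
        for _ in range(n):            # theta steps (integrand has no theta dependence)
            total += r * dr * dth     # dA = r dr dtheta
    return total

print(abs(disk_area_polar(1.0) - math.pi) < 1e-3)  # True: recovers pi * R^2
```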
One of many important generalizations of ordinary Voronoi diagrams is the higher-order Voronoi diagram. The order-k Voronoi diagram is the partitioning of the plane into regions, such that each point within a fixed region has the same k nearest sites. Many algorithms have been developed that construct the higher-order Voronoi diagram of point-sites. In this talk we will discuss randomized algorithms that can be used for a larger class of sites—specifically, polygonal objects and the abstract setting. We describe the algorithms in combinatorial rather than geometric terms, which makes it possible to construct higher-order Voronoi diagrams that have bisectors satisfying certain combinatorial properties.
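The defining property above, that every point of a fixed region has the same k nearest sites, can be checked pointwise with a brute-force membership test (this only identifies which order-k region a point lies in; it is not one of the construction algorithms discussed in the talk):

```python
import math

def order_k_cell_key(point, sites, k):
    """The order-k Voronoi region containing `point` is identified by the
    set (order ignored) of its k nearest sites."""
    ranked = sorted(sites, key=lambda s: math.dist(point, s))
    return frozenset(ranked[:k])

sites = [(0, 0), (4, 0), (0, 4), (4, 4)]
# Two nearby points in the same order-2 region share the same two nearest sites:
a = order_k_cell_key((1.0, 0.1), sites, 2)
b = order_k_cell_key((1.2, 0.2), sites, 2)
print(a == b)  # True: both regions' key is {(0, 0), (4, 0)}
```

Evaluating this key on a dense grid partitions the plane into the order-k regions; the algorithms in the talk construct that partition combinatorially instead.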
Analysis of coupled inset dielectric guide structure - Yong Heui Cho
This document analyzes the propagation and coupling characteristics of inset dielectric guide couplers theoretically and through numerical computations. It presents rigorous solutions for the dispersion relation and coupling coefficient in rapidly convergent series. Computations illustrate the effects of frequency and geometry on dispersion, coupling, and field distribution. The analysis confirms the validity of a dominant mode approximation and shows good agreement with previous work on coupled inset dielectric guides.
This document summarizes a research paper about designing beampatterns for MIMO radar systems using a covariance-based method while accounting for the locations of transmitter antennas. It discusses how changing antenna locations is equivalent to changing carrier frequency. The paper proposes optimizing two cost functions: 1) Pushing sidelobes away from the main lobe to minimize interference, and 2) Maximizing power around target locations without extra sidelobes to improve target detection. It formulates these cost functions and outlines an algorithm to optimize them using the cross-correlation matrix and antenna locations as design variables.
Similar to "Quadrature Radon transform for smoother tomographic reconstruction" (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p... - Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for B2C e-commerce websites - Alexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in Nigerian banks - Alexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized d... - Alexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistance - Alexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifham - Alexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in Namibia - Alexander Decker
This document summarizes a study on the determinants of savings in Namibia from 1991 to 2012. It reviews previous literature on savings determinants in developing countries. The study uses time series analysis including unit root tests, cointegration, and error correction models to analyze the relationship between savings and variables like income, inflation, population growth, deposit rates, and financial deepening in Namibia. The results found inflation and income have a positive impact on savings, while population growth negatively impacts savings. Deposit rates and financial deepening were found to have no significant impact. The study reinforces previous work and emphasizes the importance of improving income levels to achieve higher savings rates in Namibia.
A therapy for physical and mental fitness of school children - Alexander Decker
This document summarizes a study on the importance of exercise in maintaining physical and mental fitness for school children. It discusses how physical and mental fitness are developed through participation in regular physical exercises and cannot be achieved solely through classroom learning. The document outlines different types and components of fitness and argues that developing fitness should be a key objective of education systems. It recommends that schools ensure pupils engage in graded physical activities and exercises to support their overall development.
A theory of efficiency for managing the marketing executives in Nigerian banks - Alexander Decker
This document summarizes a study examining efficiency in managing marketing executives in Nigerian banks. The study was examined through the lenses of Kaizen theory (continuous improvement) and efficiency theory. A survey of 303 marketing executives from Nigerian banks found that management plays a key role in identifying and implementing efficiency improvements. The document recommends adopting a "3H grand strategy" to improve the heads, hearts, and hands of management and marketing executives by enhancing their knowledge, attitudes, and tools.
This document discusses evaluating the link budget for effective 900MHz GSM communication. It describes the basic parameters needed for a high-level link budget calculation, including transmitter power, antenna gains, path loss, and propagation models. Common propagation models for 900MHz that are described include Okumura model for urban areas and Hata model for urban, suburban, and open areas. Rain attenuation is also incorporated using the updated ITU model to improve communication during rainfall.
A synthetic review of contraceptive supplies in Punjab - Alexander Decker
This document discusses contraceptive use in Punjab, Pakistan. It begins by providing background on the benefits of family planning and contraceptive use for maternal and child health. It then analyzes contraceptive commodity data from Punjab, finding that use is still low despite efforts to improve access. The document concludes by emphasizing the need for strategies to bridge gaps and meet the unmet need for effective and affordable contraceptive methods and supplies in Punjab in order to improve health outcomes.
A synthesis of Taylor's and Fayol's management approaches for managing market... - Alexander Decker
1) The document discusses synthesizing Taylor's scientific management approach and Fayol's process management approach to identify an effective way to manage marketing executives in Nigerian banks.
2) It reviews Taylor's emphasis on efficiency and breaking tasks into small parts, and Fayol's focus on developing general management principles.
3) The study administered a survey to 303 marketing executives in Nigerian banks to test if combining elements of Taylor and Fayol's approaches would help manage their performance through clear roles, accountability, and motivation. Statistical analysis supported combining the two approaches.
A survey paper on sequence pattern mining with incremental... - Alexander Decker
This document summarizes four algorithms for sequential pattern mining: GSP, ISM, FreeSpan, and PrefixSpan. GSP is an Apriori-based algorithm that incorporates time constraints. ISM extends SPADE to incrementally update patterns after database changes. FreeSpan uses frequent items to recursively project databases and grow subsequences. PrefixSpan also uses projection but claims to not require candidate generation. It recursively projects databases based on short prefix patterns. The document concludes by stating the goal was to find an efficient scheme for extracting sequential patterns from transactional datasets.
A survey on live virtual machine migrations and its techniques - Alexander Decker
This document summarizes several techniques for live virtual machine migration in cloud computing. It discusses works that have proposed affinity-aware migration models to improve resource utilization, energy efficient migration approaches using storage migration and live VM migration, and a dynamic consolidation technique using migration control to avoid unnecessary migrations. The document also summarizes works that have designed methods to minimize migration downtime and network traffic, proposed a resource reservation framework for efficient migration of multiple VMs, and addressed real-time issues in live migration. Finally, it provides a table summarizing the techniques, tools used, and potential future work or gaps identified for each discussed work.
A survey on data mining and analysis in Hadoop and MongoDB - Alexander Decker
This document discusses data mining of big data using Hadoop and MongoDB. It provides an overview of Hadoop and MongoDB and their uses in big data analysis. Specifically, it proposes using Hadoop for distributed processing and MongoDB for data storage and input. The document reviews several related works that discuss big data analysis using these tools, as well as their capabilities for scalable data storage and mining. It aims to improve computational time and fault tolerance for big data analysis by mining data stored in Hadoop using MongoDB and MapReduce.
1. The document discusses several challenges for integrating media with cloud computing including media content convergence, scalability and expandability, finding appropriate applications, and reliability.
2. Media content convergence challenges include dealing with the heterogeneity of media types, services, networks, devices, and quality of service requirements as well as integrating technologies used by media providers and consumers.
3. Scalability and expandability challenges involve adapting to the increasing volume of media content and being able to support new media formats and outlets over time.
This document surveys trust architectures that leverage provenance in wireless sensor networks. It begins with background on provenance, which refers to the documented history or derivation of data. Provenance can be used to assess trust by providing metadata about how data was processed. The document then discusses challenges for using provenance to establish trust in wireless sensor networks, which have constraints on energy and computation. Finally, it provides background on trust, which is the subjective probability that a node will behave dependably. Trust architectures need to be lightweight to account for the constraints of wireless sensor networks.
This document discusses private equity investments in Kenya. It provides background on private equity and discusses trends in various regions. The objectives of the study discussed are to establish the extent of private equity adoption in Kenya, identify common forms of private equity utilized, and determine typical exit strategies. Private equity can involve venture capital, leveraged buyouts, or mezzanine financing. Exits allow recycling of capital into new opportunities. The document provides context on private equity globally and in developing markets like Africa to frame the goals of the study.
This document discusses a study that analyzes the financial health of the Indian logistics industry from 2005-2012 using Altman's Z-score model. The study finds that the average Z-score for selected logistics firms was in the healthy to very healthy range during the study period. The average Z-score increased from 2006 to 2010 when the Indian economy was hit by the global recession, indicating the overall performance of the Indian logistics industry was good. The document reviews previous literature on measuring financial performance and distress using ratios and Z-scores, and outlines the objectives and methodology used in the current study.
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is a widely used ETL tool for processing, indexing and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
20240605 QFM017 Machine Intelligence Reading List May 2024
Computer Engineering and Intelligent Systems www.iiste.org
ISSN 2222-1719 (Paper) ISSN 2222-2863 (Online)
Vol 3, No.4, 2012
Quadrature Radon Transform for Smoother Tomographic
Reconstruction
P. K. Parlewar (Corresponding author)
Ramdeobaba College of Engineering and Management
Gitti Khadan, Nagpur
Tel.: +91-9665441651, E-mail: pallavi.parlewar10@gmail.com
K. M. Bhurchandi
Visvesvaraya National Institute of Technology, Nagpur
Tel.: +91-9822939421, E-mail: bhruchandikm@ece.vnit.ac.in
Abstract
Tomographic reconstruction using Radon projections taken at angles θ ∈ [0, 2π) introduces a redundancy of four. Hence the projections in one quadrant, θ ∈ [0, π/2), are commonly used for tomographic reconstruction to reduce computational overheads. In this paper, we show that although the projections at angles θ ∈ [0, π/2) do not introduce any redundancy, the resulting tomographic reconstruction is very poor. A quadrature Radon transform is therefore introduced as a combination of Radon transforms at projection angles θ and (θ + π/2). The individual back projections for θ and (θ + π/2) are combined in two ways: 1) considering them the real and imaginary parts of a complex number, and 2) taking the average of the individual back projections. It is observed that the quadrature Radon transform yields better results, numerically and/or visually, than the conventional Radon transform over θ ∈ [0, 2π).
Keywords: single quadrant Radon transform, quadrature Radon transform, magnitude of complex projection, average, reconstruction.
1. Introduction
Tomography refers to the cross-section imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions, one by one. In other words, tomographic imaging deals with reconstructing an image from its projections, as presented by S. Chandra Kutter et al. (2010). The exact reconstruction of a signal requires an infinite number of projections, since an infinite number of slices are required to cover all of Fourier space. If, however, the signal has a mathematical form, then exact reconstruction is possible from a limited set of projections, as given by Mersereau and Oppenheim (1974). The first practical solution to image reconstruction was given by Bracewell (1956) in the field of radio astronomy, who applied tomography to map the region of emitted microwave radiation from the sun's disk. The most popular application is medical imaging by computerized tomography (CT), in which the structure of a multidimensional object is reconstructed from a set of its 2D or 3D projections (Grigoryan 2003); for example, a stack of several transverse X-ray projections of a human organ gives two-dimensional information which may be converted to three dimensions.
The selection of the projection set has attracted researchers in various applications. Svalbe and Kingston (2003) presented a selection of projection angles for the discrete Radon transform from the known Farey fractions. It has been shown there that not all Farey sequence points can be represented on Radon projections. This leads to the fact that the pixels in the annular and corner regions of an image tile are poorly represented in the projection domain with a single set of projections. They also presented a generalized finite
Radon transform computation algorithm for N×N images using prime-number-size square tiles (Svalbe and Kingston 2007). The Radon transform and its derivatives are useful for many image processing applications such as denoising, locating linear features, detecting and isolating motion (Donoho 1997, Candes 1998, Candes and Donoho 1999), pattern recognition (Chen et al. 2005), image compression, and feature representation (Donoho and Vetterli 2003). Since all these applications of the Radon transform are in essence discrete, an accurate discrete formalism of the Radon transform is required; it minimizes the need for interpolating the projections and reconstructions. Though conceptualized in the polar domain, the Finite Radon transform (FRT) is almost orthogonal (Donoho and Vetterli 2003). The FRT applies to square image data of size p×p, where p is a small prime, and assumes the image is periodic with period p in both the x and y directions, by defining the p×p array as the finite group Z_p² under addition. Both the Fourier slice theorem and the convolution property hold for the FRT. Since p is prime, there are only (p+1) distinct line directions (thus p+1 projections), corresponding to the p+1 unique subgroups. Every radial discrete line samples p points in the image and has p translates; it intersects any other distinct line only once. All these attributes match those of lines in continuous space, as given by Kingston (2007).
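These FRT properties are easy to check numerically. The sketch below is an illustration of my own, not code from the paper; the function name `frt` and the projection layout are assumptions. Because each of the p+1 projections samples every pixel exactly once, every projection row must have the same total sum.

```python
import numpy as np

def frt(img):
    """Finite Radon transform of a p x p image, p prime.

    Rows 0..p-1 sum along the wrapped lines y = (m*x + t) mod p for
    slope m; row p sums the lines x = t. Each projection samples
    every pixel exactly once."""
    p = img.shape[0]
    out = np.zeros((p + 1, p))
    xs = np.arange(p)
    for m in range(p):
        for t in range(p):
            out[m, t] = img[xs, (m * xs + t) % p].sum()
    out[p] = img.sum(axis=1)  # the (p+1)-th direction: lines x = t
    return out

demo = np.arange(25.0).reshape(5, 5)  # p = 5 is prime
proj = frt(demo)                      # (p + 1) x p array of projections
```

Since every projection counts each pixel once, `proj.sum(axis=1)` is constant and equal to `demo.sum()`, mirroring the subgroup structure described above.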
In this paper, we consider the possibility of smoother reconstruction from a finite number of projections using the proposed quadrature Radon transform, with two approaches. In the first approach, the two sets of projections are taken at angles θ and (θ + π/2) simultaneously, and the two sets are considered the real and imaginary parts of a complex number. In the second, the individual back projections are averaged to yield a smooth reconstruction. The proposed approaches yield better results than the single quadrant [0, π/2) Radon transform. Section 1 presents the literature survey and the organization of the paper. Section 2 explains the forward Radon transform and its inverse, i.e. the back projection theorem, with the help of the Fourier slice theorem and filtered back projection. The proposed Quadrature Radon Transform (QRT), its properties and algorithm are given in Section 3. Experiments are described in Section 4 and results in Section 5, followed by the conclusion.
2. Radon Transform
Let (x, y) be the coordinates of points in the plane. Consider an arbitrary integrable function f defined on a domain D of real space R². If L is any line in the plane, then the set of projections, or finite line integrals of f along all possible lines L, is defined as the two-dimensional Radon transform of f (Matus and Flusser 1993, Herman and Davidi 2008, Herman 1980). Thus,

    f̌ = Rf = ∫_L f(x, y) ds                                    (1)

where ds is an increment of length along L. The domain D may include the entire plane or some region of the plane, as shown in Fig. 1. If f is continuous and has compact support, then Rf is uniquely determined by integrating along all lines L. Obviously, the discrete integral along a line L is the summation of the intensities of the pixels of D that fall on L, and is called a point projection. A set of point projections, for all lines parallel to L with a fixed angle and spanning D, is called a projection.
Fig. 1 Projection line L through domain D
In Fig. 2 the projection line is considered perpendicular to a dotted line that passes through the origin and intersects the x axis at angle θ. The perpendicular distance between the line and the origin is p. The set of integrals along all parallel projection lines perpendicular to the dotted line is called the projection of D at angle θ. Thus, by changing the angle θ of the dotted line with the x axis over the range [0, π/2), parallel projections for each value of θ can be obtained. This set of projections is called the projections of D over the first quadrant. Similarly one can have projections of D over the remaining three quadrants.
Fig. 2 Coordinates to describe the line in Fig. 1
The line integral depends on the values of p and φ:

    f̌(p, φ) = Rf = ∫_L f(x, y) ds                              (2)
Thus, if f̌(p, φ) is finite for all p and φ, then f̌(p, φ) is the two-dimensional Radon transform of f(x, y). Now suppose a new coordinate system is introduced with axes rotated by an angle φ. If the new axes are labeled p and s as in Fig. 3, then x and y can be represented in terms of p and s using the change of reference frame in (3):

    x = p cos φ − s sin φ
    y = p sin φ + s cos φ                                       (3)
A somewhat clearer form of the Radon transform can now be presented as in (4):

    f̌(p, φ) = ∫_{−∞}^{∞} f(p cos φ − s sin φ, p sin φ + s cos φ) ds     (4)
Fig. 3 The line in Fig. 2 relative to original and rotated coordinates.
Thus, using (3), p may be written as (5), where ξ is a unit vector in the direction of p:

    p = ξ · x = x cos φ + y sin φ                               (5)
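The consistency of the rotation (3) with the projection formula (5) can be confirmed with a quick numerical check (an illustrative sketch of my own, not from the paper):

```python
import numpy as np

# Pick an arbitrary point (p, s) in the rotated frame and an angle phi,
# map it to (x, y) via Eq. (3), then project back onto the unit vector
# xi = (cos phi, sin phi) as in Eq. (5); this must recover p exactly.
rng = np.random.default_rng(0)
p, s, phi = rng.normal(size=3)
x = p * np.cos(phi) - s * np.sin(phi)
y = p * np.sin(phi) + s * np.cos(phi)
assert np.isclose(x * np.cos(phi) + y * np.sin(phi), p)
```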
The parallel lines can be visualized as translated versions of L, using a translated Dirac delta function δ(p − ξ·x) along the axis p as its multiplication with L. The transform may then be written as a line integral over R², by allowing the Dirac delta function to select the line (5) from R². Thus, modifying (4), we obtain (6):

    f̌(p, ξ) = ∫ f(x) δ(p − ξ · x) dx                           (6)
where ξ defines the direction in terms of the angle φ. For a fixed angle φ, the variable p moves over the (x, y) space along the direction defined by ξ.
2.1 Fourier slice theorem
The Fourier slice theorem (FST) explains the reconstruction of the object from the projection data. It is derived by taking the one-dimensional Fourier transform of a parallel projection and noting that it is equal to a slice of the two-dimensional Fourier transform of the object. The object can thus be estimated from the projection data using the two-dimensional inverse Fourier transform (Matus 1993, Portilla 2003, Hsung et al. 1996).
The simplest form of the Fourier slice theorem, which is independent of the orientation between the object and the coordinate system, is presented diagrammatically in Fig. 4.
Fig. 4 Fourier Slice Theorem
In Fig. 4 the (x, y) coordinate system is rotated by an angle θ. The FFT of the projection is equal to the 2-D FFT of the object along a line rotated by θ. Thus the FST states that the Fourier transform of a parallel projection of an image f(x, y) taken at an angle θ gives a slice of the 2-D transform subtending an angle θ with the u-axis. In other words, the one-dimensional FT of a set of projections gives the values of the two-dimensional FT along the line BB in Fig. 4.
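The theorem is easy to verify numerically for θ = 0, where the projection is simply a sum of the image along one axis. The check below is a small self-contained sketch of my own, not code from the paper:

```python
import numpy as np

# Fourier slice theorem at theta = 0: the 1-D FFT of the projection
# obtained by summing f along y equals the v = 0 slice of the 2-D FFT.
rng = np.random.default_rng(1)
f = rng.random((32, 32))
projection = f.sum(axis=1)                 # parallel projection onto the x axis
slice_from_projection = np.fft.fft(projection)
slice_from_2d_fft = np.fft.fft2(f)[:, 0]   # central slice of the 2-D spectrum
```

The two arrays agree to machine precision, which is exactly the statement that the 1-D FT of a projection is a slice of the 2-D FT.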
2.2. Filtered Back Projection
Filtered back projection has two steps: the filtering part, which can be visualized as a simple weighting of each projection in the frequency domain, and the back projection part, which is equivalent to finding the wedge-shaped elemental reconstructions, as presented by Hsung et al. (1996). The original image can be reconstructed exactly from the projections by applying a filter and then taking the back projections (Kak and Slaney 2001). The process of filtered back projection can be explained as below:
1) Apply a weighted filter to the set of projections to obtain filtered projections.
2) Take back projections to obtain the exact reconstruction of the original image, i.e. the inverse Radon transform algorithm.
In the process of back projection, the filtered projection at a point t makes the same contribution to all pixels along the line L in the x-y plane. The resulting projections for different angles θ are added to form an estimate of f(x, y). To every point (x, y) in the image plane there corresponds a value t (= x cos θ + y sin θ), as shown in Fig. 5. It is easily seen that for the indicated angle θ, the value of t is the same for all (x, y) on the line L. Therefore, the filtered projection Qθ will contribute equally to the reconstruction of all these points. Thus each filtered projection Qθ is smeared back, or backprojected, over the image plane.
Fig. 5 Reconstruction using back projection
In Fig. 5 a filtered projection is shown smeared back over the reconstruction plane along a line of constant t.
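The two steps above can be sketched with NumPy and SciPy. This is a minimal illustration under my own naming, not the authors' implementation: it uses image rotation both for the forward projections and for the smearing, and a simple ramp weighting in the Fourier domain as the filter.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_projections(img, angles_deg):
    """Parallel projections: rotate so the rays align with the rows, then sum."""
    return np.stack([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def ramp_filter(sino):
    """Step 1: weight each projection by |frequency| in the Fourier domain."""
    ramp = np.abs(np.fft.fftfreq(sino.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

def back_project(sino, angles_deg):
    """Step 2: smear each filtered projection over the plane at its angle."""
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for a, proj in zip(angles_deg, sino):
        smear = np.tile(proj, (n, 1))              # constant along the rays
        recon += rotate(smear, a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles_deg))

# Demo: a centered bright disk should come back brighter than the background.
n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 6) ** 2).astype(float)
angles = np.arange(0.0, 180.0, 2.0)
recon = back_project(ramp_filter(forward_projections(phantom, angles)), angles)
```

Without the ramp weighting, the same back projection produces the characteristic blurred reconstruction; the filter is what sharpens each wedge-shaped contribution.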
3. Quadrature Radon Transform
In most orthogonal image processing algorithms we consider the image f(x, y) as a separable function, i.e. f(x, y) = f(x) f(y). Thus we first process the image along rows and then along columns, i.e. only in the x and y directions, and anticipate that processing in the x and y directions is enough to approximate processing over the image in all possible directions. Obviously, processing a 2D variable in only one direction will not fetch the desired results. The Radon transform of a 2D variable for angles [0, π/2) processes the variable along line projections only in the first quadrant. Here we propose the quadrature Radon transform, which takes projections on a line at an angle θ and also along a line perpendicular to it, i.e. at (θ + π/2). Thus two sets of projections at right angles to each other are obtained. Let the continuous object f be approximated digitally within a four-quadrant lattice, where every quadrant is a digital array of lines; then the projection lines in the first and the third quadrants are parallel to each other, and similarly the projection lines in the second and the fourth quadrants are parallel to each other. Obviously, two sets of parallel projection lines on a discrete space yield the same projection results. Thus two of the four quadrant projection sets, say those of the third and fourth quadrants, are redundant and can be eliminated, and the projections in the first and the second quadrants have been considered for reconstruction using the back projection algorithm. Eq. (7) and (8) present the sets of projections in the first and the second quadrant. The pixel array of an input
image results in two sets of projections: the first for the first quadrant [0, π/2) and the second for the second quadrant [π/2, π). Both sets of projections are of the same size as the input pixel array. Thus a redundancy of only two is introduced, as against four in the case of the Radon transform over all four quadrants [0, 2π).
    f̌₁(p, ξ) = ∫ f(x) δ(p − ξ · x) dx,  θ₁ ∈ [0, π/2)          (7)
    f̌₂(p, ξ) = ∫ f(x) δ(p − ξ · x) dx,  θ₂ ∈ [π/2, π)          (8)
In the first approach, the individual sets of projections in the two quadrants are considered the real and imaginary parts of a complex entity. Thus (9) represents the proposed complex Radon transform:

    f̌ = f̌₁ + j f̌₂                                              (9)

If FBP(·) denotes the filtered back projection, (10) and (11) represent the individual reconstructed versions of f:

    f₁ = FBP(f̌₁)                                               (10)
    f₂ = FBP(f̌₂)                                               (11)

Let f be the signal reconstructed using the projections in both quadrants. Using the first approach it is given by (12); the reconstruction using the second approach is given by (13).

    f = (1/√2) √(f₁² + f₂²)                                     (12)
In the second approach, the individual sets of projections are considered independent: the first quadrant set of projections is taken as the in-phase component and the second quadrant set as the quadrature component. Thus (7) and (8) jointly represent the proposed quadrature Radon transform. While computing the inverse Radon transform, the back projection algorithm is applied individually to the two components resulting from (7) and (8); (10) and (11) represent the resulting sets of back projections, i.e. the respective inverse Radon transforms.
    f = (f₁ + f₂) / 2                                           (13)
Theoretically, each set should individually result in perfect reconstruction (Chen et al. 2007). But it is obvious that a single set of projections in only one quadrant preserves the inter-pixel relations in only one direction, either along rows or along columns. It also poorly represents the corner pixels (Svalbe and Kingston 2003). As a result, the reconstruction using projections in only one quadrant is neither accurate nor smooth. Projections in more than one quadrant, particularly at right angles to the first one, represent and preserve the inter-pixel relations more accurately. The representation of the corner pixels is also improved as a result of averaging in the spatial domain. Thus a better tomographic reconstruction of the input image is obtained.
In the first approach, the resultant back projection is computed as the magnitude of the complex number, as presented in (12); here a normalization by a factor of 2^(−1/2) is required to display the reconstructed image using 8-bit graphics. In the second approach, the resultant back projection is computed as the average of the two individual back projections, as presented in (13). With both approaches the reconstructed images are better than the reconstruction from single quadrant projections.
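The two combining rules (12) and (13) are straightforward to state in code. The sketch below uses my own helper names (`combine_complex`, `combine_average`), not the authors' implementation; f1 and f2 stand for the back projections from the first and second quadrants.

```python
import numpy as np

def combine_complex(f1, f2):
    """Approach 1, Eq. (12): magnitude of f1 + j*f2, normalized by
    2**(-1/2) so an 8-bit input range stays within 8 bits."""
    return np.sqrt(f1 ** 2 + f2 ** 2) / np.sqrt(2.0)

def combine_average(f1, f2):
    """Approach 2, Eq. (13): plain average of the two back projections."""
    return (f1 + f2) / 2.0

# Sanity check: when the two quadrant reconstructions agree, both rules
# return that common reconstruction unchanged.
f = np.full((4, 4), 100.0)
out_complex = combine_complex(f, f)   # sqrt(2 * 100^2) / sqrt(2) = 100
out_average = combine_average(f, f)
```

This also shows why the 2^(−1/2) factor is needed: without it, two identical 8-bit reconstructions would combine to √2 times their common value and overflow the display range.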
4. Experiments
Initially, the Radon transform of the gray and color test images has been computed using projections in the first quadrant, i.e. from 0 to 89 degrees. Then the inverse Radon transform of the projections has been taken using the back projection theorem with different filters like 'bilinear', 'bicubic', etc., to reconstruct the original images; the MSE and PSNR of the reconstructions have been computed, and the reconstructed images have also been inspected visually. Next, the Radon transform of the same images has been computed in the second quadrant using projections from 90 to 179 degrees, the original images have been reconstructed from these projections using the different filtered back projections, and the MSE and PSNR of these reconstructions have also been computed. In the first approach, the images reconstructed from the first and the second quadrant projections are considered the real and imaginary parts of the reconstructed image respectively, and the resultant magnitude is computed using (12); the MSE and PSNR of the images thus reconstructed have been computed. In the second approach, the average of the two images reconstructed from the first and second quadrant projections has been taken as the final reconstructed image, and the MSE and PSNR of the reconstructed average image have been computed. These experiments have been repeated for several images, but only a few representative results and images reconstructed using the discussed and proposed algorithms are presented in the next section.
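The MSE and PSNR figures referred to above follow the usual definitions for 8-bit images. A short sketch (my own helper names, not the authors' code):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

# Example: a constant error of 16 gray levels gives MSE = 16^2 = 256.
original = np.zeros((8, 8))
reconstructed = np.full((8, 8), 16.0)
score = psnr(original, reconstructed)   # about 24.05 dB
```

Lower MSE and higher PSNR indicate a more faithful reconstruction, which is how the columns of Table 1 should be read.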
5. Results
The results of the proposed forward and inverse Radon transform have been benchmarked against those of the Radon transform in the first quadrant, i.e. [0, π/2). Table 1 presents the MSE and PSNR of reconstructions with single quadrant and two quadrant projections on different color images, using various filters before back projection.
Table 1: Comparative performance of single quadrant Radon reconstruction and the proposed quadrature Radon transform

                        Single Quadrant      Quadrature (Complex)   Quadrature (Average)
Image      Filter       MSE      PSNR (dB)   MSE      PSNR (dB)     MSE      PSNR (dB)
Peppers    Nearest      110.95   27.67       24.43    34.25         58.09    30.48
           Linear       110.96   27.67       23.36    34.44         54.19    30.79
           Spline       110.84   27.68       22.06    34.69         52.54    30.92
           Cubic        110.95   27.67       24.43    34.25         58.09    30.48
Pears      Nearest      114.75   27.53       38.46    32.28         75.29    29.36
           Linear       114.54   27.54       38.33    32.41         73.84    29.44
           Spline       114.30   27.55       38.68    32.60         71.54    29.58
           Cubic        114.36   27.54       38.40    32.51         72.53    29.52
Football   Nearest      112.46   27.62       23.36    34.44         57.13    30.56
           Linear       112.17   27.63       23.90    34.53         54.92    30.73
           Spline       112.04   27.64       23.87    34.73         53.80    30.82
           Cubic        112.09   27.63       23.30    34.64         54.31    30.78
Winters    Nearest      92.43    28.47       47.84    31.33         79.33    29.13
           Linear       92.99    28.49       47.24    31.38         77.33    29.24
           Spline       92.15    28.48       46.10    31.49         76.72    29.28
           Cubic        92.01    28.49       47.80    31.42         77.28    29.24
Baboon     Nearest      122.81   27.23       64.71    30.02         101.40   28.06
           Linear       122.16   27.26       62.49    30.17         98.02    28.21
           Spline       122.02   27.27       60.57    30.30         96.75    28.27
           Cubic        122.15   27.26       61.72    30.22         97.68    28.23
It is quite obvious from Table 1 that the MSE for reconstruction using a single quadrant is the highest and accordingly the PSNR is the lowest, as presented in columns 3 and 4. For reconstructions using projections in the two quadrants with the individual back projected reconstructions considered as real and imaginary parts, the MSE is quite low and the PSNR comparatively high, as presented in columns 5 and 6. Using the average of the back projections in the two quadrants, the MSE and PSNR are moderate for all the reconstructed images. Thus, numerically, the reconstruction using a single quadrant is the poorest, while that using the first approach is the best.
Fig. 6. Comparative performance of the discussed techniques, as in Table 1, in bar chart form
Fig. 7. (a) Peppers; (b) reconstructed using single-quadrant Radon projections; (c) reconstructed using quadrature Radon complex-number magnitude; (d) reconstructed using the average of quadrature Radon back projections
Fig. 8. (a) Football; (b) reconstructed using single-quadrant Radon projections; (c) reconstructed using quadrature Radon complex-number magnitude; (d) reconstructed using the average of quadrature Radon back projections
Fig. 9. (a) Baboon; (b) reconstructed using single-quadrant Radon projections; (c) reconstructed using quadrature Radon complex-number magnitude; (d) reconstructed using the average of quadrature Radon back projections
Fig. 10. (a) Winters; (b) reconstructed using single-quadrant Radon projections; (c) reconstructed using quadrature Radon complex-number magnitude; (d) reconstructed using the average of quadrature Radon back projections
Fig. 6 presents, in bar chart form, the performance comparison of the different techniques from Table 1. Fig. 7(a), 8(a), 9(a) and 10(a) present the original color images Peppers, Football, Baboon and Winters. The images were transformed using single-quadrant Radon projections and reconstructed using filtered back projection. Fig. 7(b), 8(b), 9(b) and 10(b) present the results of reconstruction using the single-quadrant projections. It is obvious that all these images are quite distorted and hence unacceptable. Projections were then also computed for the second quadrant. Reconstruction was done using (12) after taking the back projections. The visual results in Fig. 7(c), 8(c), 9(c) and 10(c) indicate that the reconstructions using this approach are also unacceptable, even though numerically the MSE and PSNR are the best, as presented in Table 1. In the second approach, the average of the two back projections was computed as in (13). It is observed that, though the MSE and PSNR are moderate, the results presented in Fig. 7(d), 8(d), 9(d) and 10(d) are visually the best.
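The two combination rules compared above, (12) and (13), can be sketched in NumPy as follows. The array names are placeholders, and random data stands in for the actual filtered back projections; in practice r1 and r2 would come from filtered back projection of the [0, π/2) and [π/2, π) projection sets respectively.

```python
import numpy as np

# Stand-ins for the two individual filtered back projections:
# r1 from the first quadrant [0, pi/2), r2 from the second [pi/2, pi).
rng = np.random.default_rng(0)
r1 = rng.random((64, 64))
r2 = rng.random((64, 64))

# Approach 1, as in (12): treat the two reconstructions as the real and
# imaginary parts of a complex image and take its magnitude.
recon_complex = np.abs(r1 + 1j * r2)

# Approach 2, as in (13): average the two individual back projections.
recon_average = 0.5 * (r1 + r2)
```

Note that the magnitude in approach 1 equals sqrt(r1² + r2²) pixelwise, which inflates intensities relative to either reconstruction alone; the average in approach 2 keeps the result on the same intensity scale, consistent with its better visual quality reported above.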
6. Conclusion
Reconstruction using only the single-quadrant Radon projections [0, π/2) is thus not sufficient for accurate reconstruction. Both the proposed approaches yield numerically and visually better results compared to reconstruction from only the projections in the first quadrant. The first approach, taking the magnitude of the two individual back projections considered as the real and imaginary parts of a complex number, yields numerically better but visually poor results. The second approach, taking the average of the two individual back projections, yields numerically moderate but visually the best results.