This document presents a new method called the Real-valued Iterative Adaptive Approach (RIAA) for estimating the power spectral density of nonuniformly sampled data. It aims to improve upon the periodogram, which suffers from poor resolution and leakage. RIAA is an iteratively weighted least squares periodogram that uses an adaptive weighting matrix built from the most recent spectral estimate. It is shown to have significantly less leakage than the least squares periodogram through its use of an adaptive filter. The Bayesian Information Criterion is also discussed as a way to test the significance of peaks in the estimated spectrum.
This document summarizes the DBSCAN clustering algorithm. DBSCAN finds clusters based on density, requiring only two parameters: Eps, which defines the neighborhood distance, and MinPts, the minimum number of points required to form a cluster. It can discover clusters of arbitrary shape. The algorithm works by expanding clusters from core points, which have at least MinPts points within their Eps-neighborhood. Points that are not part of any cluster are classified as noise. Applications include spatial data analysis, image segmentation, and automatic border detection in medical images.
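As a concrete illustration of the two parameters, the following minimal sketch uses scikit-learn's DBSCAN implementation on synthetic data; the eps and min_samples values are illustrative choices, not taken from the summarized document.

```python
# Minimal illustration of DBSCAN's two parameters using scikit-learn.
# The eps and min_samples values below are illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus a few scattered noise points.
blob1 = rng.normal(loc=(0, 0), scale=0.3, size=(50, 2))
blob2 = rng.normal(loc=(5, 5), scale=0.3, size=(50, 2))
noise = rng.uniform(low=-2, high=7, size=(10, 2))
X = np.vstack([blob1, blob2, noise])

# eps  -> neighborhood radius, min_samples -> MinPts
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

print("cluster labels found:", sorted(set(labels)))  # -1 marks noise points
```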
On the approximation of the sum of lognormals by a log skew normal distribution (IJCNCJournal)
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of each method depends strongly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance; no single method provides the needed accuracy in all cases. This paper proposes a universal yet very simple approximation method for the sum of lognormals based on the log skew normal approximation. The main contribution of this work is an analytical method for estimating the log skew normal parameters. The proposed method provides a highly accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods, providing accuracy within 0.01 dB in all cases.
This document summarizes a paper that analyzes compressive sampling (CS) for compressing and reconstructing electrocardiogram (ECG) signals using l1 minimization algorithms. It proposes remodeling the linear programming problem as a second-order cone program to improve performance metrics such as percent root-mean-squared difference, compression ratio, and signal-to-noise ratio when reconstructing ECG signals from the PhysioNet database. The paper provides an overview of CS theory and l1 minimization algorithms, describes the proposed approach of using quadratic constraints, and defines performance metrics for analyzing the reconstructed ECG signals.
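The general form of l1 minimization with a quadratic (second-order cone) constraint can be sketched as follows on synthetic data using cvxpy; this is a generic basis-pursuit-denoising style example, not the paper's ECG pipeline, and the dimensions and noise budget are assumptions.

```python
# Generic l1-minimization with a second-order cone constraint; synthetic data,
# not the paper's ECG pipeline. Dimensions and epsilon are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m, k = 256, 100, 8                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
sigma = 0.01
y = A @ x_true + sigma * rng.standard_normal(m)

x = cp.Variable(n)
eps = sigma * np.sqrt(m) * 1.5                 # noise budget (assumed)
# min ||x||_1  s.t.  ||A x - y||_2 <= eps   (a second-order cone constraint)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm(A @ x - y, 2) <= eps])
prob.solve()

print("relative reconstruction error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```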
1. The document proposes a new image denoising method called NormalShrink that uses wavelet thresholding with an adaptive threshold estimated based on the subband characteristics of the noisy image.
2. Experimental results on test images like Lena, Barbara and Goldhill show that NormalShrink outperforms other methods like SureShrink, BayesShrink and Wiener filtering in terms of PSNR for most noise levels, remaining within 4% of the best possible OracleShrink method.
3. NormalShrink is also computationally more efficient than BayesShrink, removing noise substantially while preserving important image features better than the compared methods (a minimal subband-thresholding sketch follows below).
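The sketch below shows subband-adaptive wavelet soft-thresholding in the spirit of NormalShrink using PyWavelets; the threshold formula used here (T = beta * sigma_n^2 / sigma_y) is the commonly reported form and should be checked against the paper, and the wavelet and level count are illustrative choices.

```python
# Subband-adaptive wavelet soft-thresholding in the spirit of NormalShrink.
# The threshold formula below is the commonly reported form (an assumption here);
# consult the paper for the exact definition.
import numpy as np
import pywt

def denoise(img, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Robust noise estimate from the finest diagonal subband (standard trick).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]                       # keep the approximation subband
    for cH, cV, cD in coeffs[1:]:
        Lk = cH.size                        # subband size at this scale
        beta = np.sqrt(np.log(Lk / levels)) # scale-dependent factor (assumed)
        new = []
        for band in (cH, cV, cD):
            sigma_y = max(band.std(), 1e-12)
            T = beta * sigma_n**2 / sigma_y
            new.append(pywt.threshold(band, T, mode="soft"))
        out.append(tuple(new))
    return pywt.waverec2(out, wavelet)

noisy = np.random.default_rng(0).normal(size=(128, 128))
print(denoise(noisy).shape)
```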
Clustering Using Shared Reference Points Algorithm Based On a Sound Data Model (Waqas Tariq)
A novel clustering algorithm, CSHARP, is presented for finding clusters of arbitrary shapes and arbitrary densities in high-dimensional feature spaces. It can be considered a variation of the Shared Nearest Neighbor (SNN) algorithm, in which each sample data point votes for the points in its k-nearest neighborhood. Sets of points sharing a common mutual nearest neighbor are considered dense regions or blocks, and these blocks are the seeds from which clusters may grow. CSHARP is therefore not a point-to-point clustering algorithm but a block-to-block clustering technique. Many of its advantages stem from two facts: noise points and outliers correspond to blocks of small size, and homogeneous blocks overlap strongly. The technique is not prone to merging clusters of different densities or different homogeneity. The algorithm has been applied to a variety of low- and high-dimensional data sets with superior results over existing techniques such as DBScan, K-means, Chameleon, Mitosis and Spectral Clustering; the quality of its results as well as its time complexity rank it ahead of these techniques.
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch... (sipij)
This is an experimental design problem in which there are two Poisson sources with two possible, known rates and one counter. Through a switch, the counter can either observe the sources individually or combine the counts so that it observes their sum. The sensor scheduling problem is to determine the optimal proportion of the available time to allocate to individual versus joint sensing, under a total time constraint. Two different metrics are used for optimization: the mutual information between the sources and the observed counts, and the probability of detection for the associated source-detection problem. Our results, which are primarily computational, indicate similar but not identical behavior under the two cost functions.
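The mutual-information metric can be illustrated for a single Poisson observation of a binary-rate source as in the sketch below; the rates, prior, and observation time are illustrative numbers, not the paper's setup, and the time-allocation optimization itself is not shown.

```python
# Mutual information I(X; Y) between a binary source X (two known rates) and a
# Poisson count Y observed for time T -- the information metric mentioned above,
# on illustrative numbers rather than the paper's actual setup.
import numpy as np
from scipy.stats import poisson

def mutual_information(rates, prior, T, y_max=200):
    y = np.arange(y_max)
    # p(y | x) for each hypothesis x, then the mixture p(y).
    p_y_given_x = np.array([poisson.pmf(y, lam * T) for lam in rates])
    p_y = prior @ p_y_given_x
    # I(X;Y) = sum_x p(x) sum_y p(y|x) log2( p(y|x) / p(y) )
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_y > 0, p_y_given_x / p_y, 1.0)
        terms = np.where(p_y_given_x > 0, p_y_given_x * np.log2(ratio), 0.0)
    return float(prior @ terms.sum(axis=1))

print(mutual_information(rates=[2.0, 8.0], prior=np.array([0.5, 0.5]), T=1.0))
```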
Array diagnosis using compressed sensing in near field (Alexander Decker)
This document summarizes a technique for diagnosing faults in antenna arrays using compressed sensing on near-field measurement data. The technique aims to reduce measurement time by acquiring data from fewer measurement points than traditional methods such as back-propagation. It does this by taking the difference between near-field measurements of a fault-free reference array and of the array under test to create a sparse "innovation vector", which is then reconstructed using an L1-norm regularization technique from compressed sensing. Numerical examples on a 289-element array show the technique can accurately detect up to 3 faulty elements from a number of measurement points on the order of K log N, where K is the number of faults.
Matrix Padding Method for Sparse Signal Reconstruction (CSCJournals)
This document summarizes a research paper that proposes a new method for sparse signal reconstruction using compressive sensing. The method involves padding the measurement matrix with additional rows during compression to solve the underdetermined system of equations. During reconstruction, an iterative least mean squares approximation is used. The performance of the proposed method is compared to other compressive sensing algorithms like l1-magic, OMP, and CoSaMP. Results showed the proposed method outperformed these other algorithms in terms of reconstruction accuracy in both noisy and noiseless environments.
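One of the comparison baselines named above, OMP, can be run in a few lines with scikit-learn, as sketched below on synthetic data; this is only the baseline, not the paper's matrix-padding method, and the problem sizes are illustrative.

```python
# OMP baseline via scikit-learn on synthetic sparse data (not the paper's method).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
n, m, k = 256, 80, 6
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
x_hat = omp.coef_
print("support recovered:",
      set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x_true)))
```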
Training and Inference for Deep Gaussian Processes (Keyon Vafa)
The document discusses training and inference for deep Gaussian processes (DGPs). It introduces the Deep Gaussian Process Sampling (DGPS) algorithm for learning DGPs. The DGPS algorithm relies on Monte Carlo sampling to circumvent the intractability of exact inference in DGPs. It is described as being more straightforward than existing DGP methods and able to more easily adapt to using arbitrary kernels. The document provides background on Gaussian processes and motivation for using deep Gaussian processes before describing the DGPS algorithm in more detail.
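The Monte Carlo sampling idea can be illustrated by forward-sampling through a two-layer GP prior, as in the sketch below with an RBF kernel; this shows the sampling mechanism only, not the DGPS training algorithm, and the kernel parameters are illustrative.

```python
# Monte Carlo forward sampling through a two-layer GP prior with an RBF kernel.
# Illustrates the sampling idea behind DGPS, not the DGPS algorithm itself.
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sample_two_layer_gp(X, rng, jitter=1e-6):
    n = X.shape[0]
    # Layer 1: hidden function values drawn from a GP over the inputs.
    K1 = rbf(X, X) + jitter * np.eye(n)
    h = rng.multivariate_normal(np.zeros(n), K1)[:, None]
    # Layer 2: outputs drawn from a GP over the layer-1 values.
    K2 = rbf(h, h) + jitter * np.eye(n)
    return rng.multivariate_normal(np.zeros(n), K2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50)[:, None]
samples = np.stack([sample_two_layer_gp(X, rng) for _ in range(10)])
print(samples.shape)   # (10, 50): ten Monte Carlo draws from the DGP prior
```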
Time of arrival based localization in wireless sensor networks a non linear ... (sipij)
In this paper, we aim to obtain the location information of a sensor node deployed in a Wireless Sensor Network (WSN). A Time of Arrival based localization technique is considered, and the position of the unknown sensor node is computed using non-linear techniques: Non-linear Least Squares and Maximum Likelihood estimation. The performance of these techniques is compared with the Cramer-Rao Lower Bound (CRLB). Each technique is implemented with iterative approaches, namely the Newton-Raphson, Gauss-Newton, and Steepest Descent estimates, for comparison. Based on the simulation results, the approaches are compared; the study shows that localization based on the Maximum Likelihood approach achieves higher localization accuracy.
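One of the iterative estimators named above, Gauss-Newton, applied to ToA-derived range measurements looks roughly like the sketch below; the anchor layout, noise level, and iteration count are illustrative assumptions.

```python
# Gauss-Newton iterations on ToA-derived range measurements (illustrative setup).
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])

# ToA gives ranges d_i = ||target - anchor_i|| plus measurement noise.
d = np.linalg.norm(anchors - target, axis=1) + 0.05 * rng.standard_normal(4)

x = np.array([5.0, 5.0])                   # initial guess
for _ in range(10):
    diff = x - anchors                     # (4, 2)
    r_hat = np.linalg.norm(diff, axis=1)   # predicted ranges
    residual = d - r_hat
    J = -diff / r_hat[:, None]             # Jacobian of the residuals w.r.t. x
    # Gauss-Newton step: solve (J^T J) dx = -J^T residual
    dx = np.linalg.solve(J.T @ J, -J.T @ residual)
    x = x + dx

print("estimate:", x, "true:", target)
```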
Classification of Iris Data using Kernel Radial Basis Probabilistic Neural Ne... (Scientific Review)
The Radial Basis Probabilistic Neural Network (RBPNN) has a broad generalization capability and has been successfully applied in multiple fields. In this paper, the Euclidean distance of each data point in the RBPNN is replaced by its kernel-induced distance instead of the conventional sum-of-squares distance. The kernel function generalizes the distance metric by measuring the distance between two data points after they are mapped into a high-dimensional space. Comparing the four constructed classification models, Kernel RBPNN, Radial Basis Function networks, RBPNN, and Back-Propagation networks, the results show that classification of the Iris data with Kernel RBPNN displays outstanding performance.
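The kernel-induced distance amounts to computing the feature-space distance K(x,x) - 2K(x,y) + K(y,y), as sketched below; a Gaussian (RBF) kernel is used here as an illustrative choice, not necessarily the paper's.

```python
# Kernel-induced distance replacing the plain Euclidean distance: the squared
# feature-space distance is K(x,x) - 2K(x,y) + K(y,y). The RBF kernel and gamma
# are illustrative choices.
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, kernel=rbf_kernel):
    d2 = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
    return np.sqrt(max(d2, 0.0))

x, y = np.array([5.1, 3.5, 1.4, 0.2]), np.array([6.7, 3.0, 5.2, 2.3])
print(kernel_distance(x, y))
```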
Multi-polarization reconstruction from compact polarimetry based on modified ... (yinjj07)
The document describes an improved algorithm for reconstructing multi-polarization information from compact polarimetry (CP) measurements. It proposes modifying the traditional four-component scattering decomposition model by using a new volume scattering model. This allows the decomposed helix scattering component to be used to account for non-reflection symmetry in CP data. It then develops an average relationship between co-polarized and cross-polarized channels based on the scattering powers and mechanisms. Experimental data demonstrates the effectiveness of the proposed reconstruction method.
Clustered Compressive Sensing-based Image Denoising Using Bayesian Framework (csandit)
This paper provides a compressive sensing (CS) method for denoising images within a Bayesian framework. Some images, for example magnetic resonance images (MRI), are usually very weak due to the presence of noise and the weak nature of the signal itself, so denoising boosts the true signal strength. Under the Bayesian framework, we use two different priors, sparsity and clusteredness of the image data, as prior information to remove noise; the method is therefore named clustered compressive sensing based denoising (CCSD). After developing the Bayesian framework, we applied our method to synthetic data, the Shepp-Logan phantom, and sequences of fMRI images. The results show that applying CCSD gives better results than using only conventional compressive sensing (CS) methods in terms of Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). In addition, we show that this algorithm can have some advantages over state-of-the-art methods such as Block-Matching and 3D Filtering (BM3D).
IMAGE REGISTRATION USING ADVANCED TOPOLOGY PRESERVING RELAXATION LABELING (csandit)
This paper presents a relaxation labeling technique with newly defined compatibility measures for solving a general non-rigid point matching problem. A point matching method using relaxation labeling exists in the literature; however, its compatibility coefficients always take a binary value of zero or one depending on whether a point and a neighboring point have corresponding points. Our approach generalizes this relaxation labeling approach: the compatibility coefficients take n discrete values that measure the correlation between edges, and we use a log-polar diagram to compute the correlations. Through simulations, we show that this topology-preserving relaxation method improves matching performance significantly compared to other state-of-the-art algorithms such as shape context, thin plate spline-robust point matching, robust point matching by preserving local neighborhood structures, and coherent point drift.
The document summarizes the Birch clustering algorithm. It introduces the key concepts of Birch including clustering features (CF), which summarize information about clusters, and clustering feature trees (CFT), which are hierarchical data structures that store CFs. Birch uses a single scan to incrementally build a CFT, and then performs additional scans to improve clustering quality. It scales well to large databases due to the CF and CFT structures.
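A clustering feature is the triple CF = (N, LS, SS): point count, linear sum, and sum of squares. CFs merge additively, and the centroid and radius of a cluster can be derived from them without revisiting the raw points; the sketch below shows only this CF bookkeeping, not a full CF-tree.

```python
# Clustering feature (CF) bookkeeping used by BIRCH: CF = (N, LS, SS).
import numpy as np

class ClusteringFeature:
    def __init__(self, point):
        p = np.asarray(point, dtype=float)
        self.n, self.ls, self.ss = 1, p.copy(), float(p @ p)

    def merge(self, other):
        # CFs of two clusters add component-wise when the clusters merge.
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # sqrt of the average squared distance of points to the centroid
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

cf = ClusteringFeature([1.0, 2.0])
for p in ([1.5, 2.5], [0.5, 1.5]):
    cf.merge(ClusteringFeature(p))
print(cf.n, cf.centroid(), cf.radius())
```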
This document discusses various algorithms used for clustering data streams. It begins by introducing the problem of clustering streaming data and the common approach of using micro-clusters to summarize streaming data. It then reviews several prominent clustering algorithms like DBSCAN, DENCLUE, SNN, and CHAMELEON. The document focuses on the DBSTREAM algorithm, which explicitly captures density between micro-clusters using a shared density graph to improve reclustering. Experimental results show DBSTREAM's reclustering using shared density outperforms other reclustering strategies while using fewer micro-clusters.
Birch is an efficient data clustering algorithm designed for very large databases. It builds a Clustering Feature (CF) tree to cluster data points based on their distances. The CF tree allows clustering decisions to be made without scanning the entire dataset. Birch operates in four phases: 1) building an initial CF tree, 2) condensing the tree, 3) performing global clustering on leaf nodes, and 4) optional refinement of clusters. The algorithm aims to minimize runtime and data scans for clustering large databases.
Sums of lognormal random variables (RVs) occur in many important problems in wireless communications, especially in interference calculations. Several methods have been proposed to approximate the lognormal sum distribution; most of them require lengthy Monte Carlo simulations or advanced, slowly converging numerical integrations for curve fitting and parameter estimation. Recently, it has been shown that the log skew normal distribution can offer a tight approximation to lognormal sum distributed RVs. We propose a simple and accurate method for fitting the log skew normal distribution to the lognormal sum distribution, using a moments and tail-slope matching technique to find the optimal log skew normal distribution parameters. We compare our method with those in the literature in terms of complexity and accuracy, and conclude that our method has the same accuracy as other methods but is simpler. To further validate our approach, we provide an example of outage probability calculation in a lognormal shadowing environment based on the log skew normal approximation.
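A numerical stand-in for the idea is to simulate a sum of lognormal RVs and fit a skew normal to the log of the sum (i.e., a log skew normal to the sum), as sketched below with scipy's generic skewnorm.fit; this is not the paper's analytical moments/tail-slope matching, and the dB-spread and number of terms are illustrative.

```python
# Fit a log skew normal to a simulated lognormal sum (numerical stand-in only;
# the paper's analytical moment/tail-slope matching is not reproduced here).
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(4)
sigma_db = 8.0                           # dB spread of each component (assumed)
sigma = sigma_db * np.log(10) / 10.0     # convert dB spread to natural-log std
n_terms, n_samples = 6, 100_000

components = np.exp(rng.normal(0.0, sigma, size=(n_samples, n_terms)))
log_sum = np.log(components.sum(axis=1))

a, loc, scale = skewnorm.fit(log_sum)    # skewness, location, scale
print(f"fitted log skew normal params: a={a:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```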
Principal component analysis and matrix factorizations for learning (part 1) ... (zukun)
This document discusses principal component analysis (PCA) and matrix factorizations for learning. It provides an overview of PCA and singular value decomposition (SVD), their history and applications. PCA and SVD are widely used techniques for dimensionality reduction and data transformation. The document also discusses how PCA relates to other methods like spectral clustering and correspondence analysis.
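The core PCA-via-SVD computation can be sketched in a few lines of numpy, as below; the synthetic data and the choice of two retained components are illustrative.

```python
# Minimal PCA via SVD on centered data: the right singular vectors are the
# principal directions and the squared singular values give component variances.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))  # correlated data

Xc = X - X.mean(axis=0)                  # center
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_variance = s**2 / (X.shape[0] - 1)
scores = Xc @ Vt[:2].T                   # project onto the first two components
print("explained variance:", np.round(explained_variance, 3))
print("projected shape:", scores.shape)
```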
Satellite image compression reduces redundancy in the data representation in order to save storage and transmission cost. Image compression compensates for the limited on-board resources, in terms of mass memory and downlink bandwidth, and thus provides a solution to the bandwidth-versus-data-volume dilemma of modern spacecraft; compression is therefore a very important feature in the payload image-processing units of many satellites. In this paper, an improvement of the quantization step of the input vectors is proposed. The k-nearest neighbour (KNN) algorithm is used on each axis, and the three classifications, considered as three independent sources of information, are combined in the framework of evidence theory to select the best code vector. A Huffman scheme is then applied for encoding and decoding.
The document describes a seminar report on using a divide and conquer algorithm to find the closest pair of points from a set of points in two dimensions. It discusses implementing both a brute force algorithm that compares all pairs, taking O(n^2) time, and a divide and conquer algorithm that recursively divides the point set into halves and finds the closest pairs in each subset and near the dividing line, taking O(n log n) time. It provides pseudocode for both algorithms and discusses the history of the closest pair problem and the improvements made to it over time, reducing the number of distance computations needed.
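Both approaches can be sketched as below, assuming distinct points; the divide and conquer version sorts by x, recurses on the two halves, and then only checks a narrow strip around the dividing line, where each point needs a constant number of comparisons against its neighbors in y order.

```python
# Closest pair of points in the plane: brute force O(n^2) vs. divide and conquer
# O(n log n). Points are assumed distinct in this sketch.
import math

def brute_force(pts):
    best = math.inf
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            best = min(best, math.dist(pts[i], pts[j]))
    return best

def closest_pair(pts):
    px = sorted(pts)                                  # sorted by x
    py = sorted(pts, key=lambda p: p[1])              # sorted by y

    def rec(px, py):
        n = len(px)
        if n <= 3:
            return brute_force(px)
        mid = n // 2
        mid_x = px[mid][0]
        left_set = set(px[:mid])
        pyl = [p for p in py if p in left_set]
        pyr = [p for p in py if p not in left_set]
        d = min(rec(px[:mid], pyl), rec(px[mid:], pyr))
        # Strip around the dividing line, scanned in y order; only the next few
        # strip neighbors can be closer than d.
        strip = [p for p in py if abs(p[0] - mid_x) < d]
        for i in range(len(strip)):
            for j in range(i + 1, min(i + 8, len(strip))):
                d = min(d, math.dist(strip[i], strip[j]))
        return d

    return rec(px, py)

pts = [(2.0, 3.0), (12.0, 30.0), (40.0, 50.0), (5.0, 1.0), (12.0, 10.0), (3.0, 4.0)]
print(closest_pair(pts), brute_force(pts))   # both should agree
```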
- Compressive sensing (CS) theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
- CS relies on two principles:
  - Sparsity, which pertains to the signal of interest.
  - Incoherence, which pertains to the sensing modality.
Trust Region Algorithm - Bachelor Dissertation (Christian Adom)
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
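The Cauchy point mentioned above is the minimizer of the quadratic model m(p) = f + g^T p + 0.5 p^T B p along the steepest-descent direction, restricted to the trust region ||p|| <= Delta; the sketch below uses the standard Nocedal & Wright-style formula with illustrative values for g, B, and Delta.

```python
# Cauchy point for the trust-region subproblem (standard closed-form formula).
import numpy as np

def cauchy_point(g, B, delta):
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                       # model non-convex along -g: go to the boundary
    else:
        tau = min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

g = np.array([2.0, -1.0])
B = np.array([[4.0, 1.0], [1.0, 3.0]])
print(cauchy_point(g, B, delta=0.5))
```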
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast, high-quality clustering algorithms play an important role in helping users effectively navigate, summarize, and organize this information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data; the cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + (PSO) clustering algorithm, combining Particle Swarm Optimization with subtractive clustering, that performs fast clustering. For comparison purposes, we applied the Subtractive + (PSO) clustering algorithm, PSO, and the Subtractive clustering algorithm to three different datasets. The results illustrate that the Subtractive + (PSO) clustering algorithm generates the most compact clustering results compared to the other algorithms.
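The core of subtractive clustering assigns each point a potential that is high when many other points lie nearby and picks the highest-potential point as the first center, as sketched below; the radius r_a is an illustrative choice and the PSO refinement described in the paper is not shown.

```python
# First-center selection in subtractive clustering (Chiu-style potential),
# without the subsequent potential reduction or the PSO refinement.
import numpy as np

def first_center(X, r_a=1.0):
    alpha = 4.0 / r_a**2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    potential = np.exp(-alpha * d2).sum(axis=1)
    return X[np.argmax(potential)], potential.max()

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(4, 0.3, (40, 2))])
center, pot = first_center(X)
print("first center near one of the blobs:", center)
```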
Birch is an efficient data clustering algorithm for large datasets. It builds a CF-tree from one pass over the data, then performs clustering in memory. This allows it to cluster large datasets with fewer data scans than other algorithms, such as k-means and CLARANS, which require multiple full scans. Experimental results show Birch completes clustering significantly faster than these other algorithms while achieving comparable or better clustering quality.
Bipin Jha has over 10 years of experience in IT support and networking. He received a B.A. from Lalit Narayan Mithila University in 2008 and earned his JCHNP certification in 2011. He currently works as an IT resources school coordinator at Extramarks Education, where his responsibilities include maintaining Ubuntu systems, creating and managing databases, writing scripts, troubleshooting hardware and software issues, and providing on-site support. Previously, he worked as a networks engineer and desktop engineer. He has expertise in Windows, Ubuntu, networking, routers, and basic hardware repair.
Zebrafish embryos were exposed to varying concentrations of fertilizer runoff to examine the effects on growth and development. There was no effect on mortality but significant thresholds were observed for body length and width starting at 4000x the EPA limit. While concentrations exceeded safe levels, replicating local values from reclaimed water could provide more relevant results. Future research with wider ranges and continuous exposure may show greater deformities, while internal analyses could find additional developmental issues.
Srinivas Medikonda is a Principal Solution Architect with nearly 20 years of experience in project/program management, process reengineering, business analysis, client relationship management, software development costing and budgeting, and requirement gathering/analysis. He has significant experience working in the banking and finance, healthcare, telecom, high tech, and manufacturing industries. He is proficient in providing CRM consulting services and is well-versed in various project management methodologies. He also has sound knowledge of key CRM concepts and is an effective communicator with strong leadership, problem-solving, and people management skills.
Strengthening leadership and building new teams, pop up uni, 1pm, 2 september... (NHS England)
Expo is the most significant annual health and social care event in the calendar, uniting more NHS and care leaders, commissioners, clinicians, voluntary sector partners, innovators and media than any other health and care event.
Expo 15 returned to Manchester and was hosted once again by NHS England. Around 5000 people a day from health and care, the voluntary sector, local government, and industry joined together at Manchester Central Convention Centre for two packed days of speakers, workshops, exhibitions and professional development.
This year, Expo was more relevant and engaging than ever before, happening within the first 100 days of the new Government, and almost 12 months after the publication of the NHS Five Year Forward View. It was also a great opportunity to check on and learn from the progress of Greater Manchester as the area prepares to take over a £6 billion devolved health and social care budget, pledging to integrate hospital, community, primary and social care and vastly improve health and well-being.
More information is available online: www.expo.nhs.uk
Social media can effectively impact business growth. It provides opportunities for advertising, marketing, promotions, recruitment, professional blogging, user forums, and research and development. Virgin America generated thousands of tweets and press coverage by identifying social media influencers to give free flights to, delivering more immediate impact than traditional advertising. American Express created a forum for small businesses on Facebook that generated interest and increased fan base through a strategic partnership. Scott Monty advises Ford on social media integration across the company. Local companies like Dialog, SriLankan Airlines, Anything.lk, and Munchkin have also attempted to benefit from social media.
The document discusses conventions and forms used in various media products such as music videos. It provides examples of conventions from popular music videos and albums and how the student has used or been inspired by these conventions in their own media products. Specifically, it discusses conventions around cinematic locations, lighting, pacing of cuts, dance, photography style, color schemes, and placement of text in album artwork. The overall purpose is to evaluate how the student's media products do or do not follow conventions from real media products.
Gassim Al-Gassim is a senior executive with over 30 years of experience in oil and gas, petrochemicals, and railway industries. He has extensive experience leading large-scale construction projects in Saudi Arabia, including overseeing $20 billion worth of projects for the Saudi Railway Company. Al-Gassim is skilled in project management, operations, engineering, strategic planning, and relationship building. He holds a Bachelor's degree in Electrical Engineering and is fluent in Arabic and English.
The student reflected on her mock interview experience for an ED 411 class. Before the interview, she was worried that the interviewer might ask about unknown topics, that she would fidget without realizing it, and whether her answers would be appropriate. After the interview, she realized the interviewer was understanding when asked about unfamiliar topics, she caught herself fidgeting but was able to stop, and felt the interviewer provided helpful feedback and genuinely wanted to support her development.
This document provides an overview of VPN penetration testing. It begins with an introduction of the presenter and agenda. It then defines what a VPN is and why they are used. The main types of VPN protocols covered are PPTP, IPSec, SSL, and hybrid VPNs. Details are given about each protocol type. The document also discusses VPN traffic, applications, and potential issues like weak encryption, brute force attacks, lack of data integrity checks, and port failures leading to data leaks. Contact information is provided at the end.
The document summarizes a mock interview conducted using a child interview model. It provides reflections on the strengths and weaknesses of the interviewer's approach in the introduction, building rapport, gathering information, and closing sections of the interview. Some strengths included advising the child of recording, establishing ground rules, and obtaining key details of the abuse. Weaknesses consisted of a lack of examples for correction, not thoroughly documenting the introduction, and not fully probing for details or neutralizing the conversation at closure.
Performance and feature comparison of GoLang testing frameworks versus non-GoLang testing frameworks. This presentation was presented at GopherConIndia-16.
Ms. Le Thi Dung graduated from the Foreign Trade University in 2012 with a Bachelor's degree in External Economy. She has been working as a Sale Executive at An Phat Plastic and Green Environment JSC since 2012, where she is responsible for finding new customers, developing business with current customers, negotiating and signing sales contracts, and providing after sales service. During her time there, she has received a Certificate of Excellent Sale Executive, met her monthly sales targets, and increased sales volume threefold over three years. In her free time, she enjoys reading, traveling, listening to music, and shopping.
Performance of Matching Algorithms for Signal Approximation (iosrjce)
The document summarizes and compares several algorithms for signal approximation and sparse signal recovery, including Equivalent Detection (ED), Non-negative Equivalent Detection, Orthogonal Matching Pursuit (OMP), and Stagewise Orthogonal Matching Pursuit (StOMP). It discusses how each algorithm works, including iteratively selecting atoms from a dictionary to build up a sparse representation of the signal. OMP selects one atom per iteration while StOMP selects all atoms above a threshold. The document also discusses computational complexities of the different algorithms.
This document discusses the performance of matching algorithms for signal approximation. It begins by introducing matching pursuit algorithms such as Orthogonal Matching Pursuit (OMP) and Stagewise Orthogonal Matching Pursuit (StOMP), which are greedy algorithms that approximate sparse signals. It then describes the Non-Negative Least Squares algorithm, which solves non-negative least squares problems. Finally, it discusses Extraneous Equivalent Detection (EED), a modification of OED that incorporates non-negativity of the representations by using a non-negative optimization technique instead of orthogonal projection.
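The non-negative least squares step mentioned above is available directly in SciPy, as sketched below on synthetic data with a non-negative sparse ground truth; the problem sizes are illustrative.

```python
# Non-negative least squares via SciPy on synthetic non-negative data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
m, n = 60, 120
A = np.abs(rng.standard_normal((m, n)))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(0.5, 2.0, 5)
y = A @ x_true

x_hat, residual_norm = nnls(A, y)
print("residual:", residual_norm, "negative entries:", int((x_hat < 0).sum()))
```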
The document describes the implementation of a wideband spectrum sensing algorithm using a software-defined radio. It discusses using an energy detection based approach to sense the local frequency spectrum and determine which portions are unused. The algorithm is first tested via simulations in MATLAB using known signal parameters. It is then tested using real data collected from a Universal Software Radio Peripheral (USRP) to analyze the actual wireless spectrum.
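A minimal energy-detection decision per frequency bin can be sketched as below on synthetic data; this is not the USRP pipeline, and the noise power, signal bin, and threshold margin are illustrative assumptions.

```python
# Energy detection sketch: per-FFT-bin energy compared against a noise-floor-based
# threshold; occupied bins are flagged. Synthetic data, illustrative threshold.
import numpy as np

rng = np.random.default_rng(8)
n_fft, noise_power = 256, 1.0
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_fft)
                                    + 1j * rng.standard_normal(n_fft))

# A narrowband signal occupying a single FFT bin of the band.
t = np.arange(n_fft)
k_sig = 51
signal = 3.0 * np.exp(2j * np.pi * k_sig * t / n_fft)
x = noise + signal

bin_energy = np.abs(np.fft.fft(x)) ** 2 / n_fft
threshold = 8.0 * noise_power                 # margin above the noise floor (assumed)
occupied = np.flatnonzero(bin_energy > threshold)
print("occupied FFT bins:", occupied)
```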
Fixed Point Realization of Iterative LR-Aided Soft MIMO Decoding Algorithm (CSCJournals)
Multiple-input multiple-output (MIMO) systems have been widely acclaimed for providing high data rates. Recently, Lattice Reduction (LR) aided detectors have been proposed to achieve near Maximum Likelihood (ML) performance with low complexity. In this paper, we develop the fixed-point design of an iterative soft decision based LR-aided K-best decoder, which reduces the complexity of the existing sphere decoder. A simulation-based word-length optimization is presented for physical implementation of the K-best decoder. Simulations show that the fixed-point result with 16-bit precision can keep the bit error rate (BER) degradation within 0.3 dB for 8×8 MIMO systems with different modulation schemes.
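The word-length idea behind the fixed-point design can be illustrated by quantizing values to a 16-bit signed fixed-point format and inspecting the quantization error, as below; the split into integer and fractional bits (Q4.11 here) is an illustrative assumption, not the paper's optimized word lengths.

```python
# Quantize to a 16-bit signed fixed-point format (Q4.11 assumed) and measure
# the quantization error.
import numpy as np

def to_fixed_point(x, total_bits=16, frac_bits=11):
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), lo, hi)      # round and saturate
    return q / scale

rng = np.random.default_rng(9)
x = rng.standard_normal(10_000)
xq = to_fixed_point(x)
err = x - xq
print("max |error|:", np.abs(err).max(), "RMS error:", np.sqrt(np.mean(err**2)))
```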
1) The document discusses power spectrum estimation methods for digital signal processing.
2) It describes five common non-parametric power spectrum estimation techniques: the periodogram method, the modified periodogram method, Bartlett's method, Welch's method, and the Blackman-Tukey method (a minimal Welch example is sketched after this list).
3) Each method has different tradeoffs between frequency resolution, variance, and bias that make some techniques better for certain applications like feature extraction.
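The Welch estimate referenced above can be computed with SciPy as sketched below; the sampling rate, segment length, and overlap are illustrative choices.

```python
# Welch PSD estimate of a noisy sinusoid via SciPy. Averaging over overlapping,
# windowed segments trades frequency resolution for lower variance compared
# with a single periodogram.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 125.0 * t) \
    + 0.5 * np.random.default_rng(10).standard_normal(t.size)

f, pxx = welch(x, fs=fs, nperseg=256, noverlap=128)
print("peak estimated near", f[np.argmax(pxx)], "Hz")
```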
Evaluation of the Sensitivity of Seismic Inversion Algorithms to Different St... (IJERA Editor)
This document evaluates the sensitivity of seismic inversion algorithms to wavelets estimated using different statistical methods. It summarizes two wavelet estimation techniques - the Hilbert transform method and smoothing spectra method. It also describes two inversion methods - Narrow-band inversion and a Bayesian approach. Numerical experiments were conducted to analyze the performance of the wavelet estimation methods and sensitivity of the inversion algorithms to estimated wavelets. The smoothing spectra method produced better wavelet estimates. The Bayesian approach yielded superior inversion results and more robust impedance estimates compared to Narrow-band inversion in all tests.
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Iterative Soft Decision Based Complex K-best MIMO Decoder (CSCJournals)
This paper presents an iterative soft decision based complex multiple-input multiple-output (MIMO) decoding algorithm, which reduces the complexity of the Maximum Likelihood (ML) detector. We develop a novel iterative complex K-best decoder exploiting lattice reduction techniques for 8×8 MIMO. Besides the list size, a new adjustable variable has been introduced in order to control the on-demand child expansion. Following this method, we obtain 6.9 to 8.0 dB improvement over the real-domain K-best decoder and 1.4 to 2.5 dB better performance compared to the iterative conventional complex decoder for the 4th iteration and the 64-QAM modulation scheme. We also demonstrate the significance of the new parameter on the bit error rate. The proposed decoder not only increases the performance but also reduces the computational complexity to a certain level.
Many algorithms have been developed to find sparse representations over redundant dictionaries or transforms. This paper presents a novel method for compressive sensing (CS)-based image compression using a sparse basis from the CDF 9/7 wavelet transform. The measurement matrix is applied to the three levels of wavelet transform coefficients of the input image for compressive sampling. We use three different measurement matrices: a Gaussian matrix, a Bernoulli measurement matrix, and a random orthogonal matrix. Orthogonal matching pursuit (OMP) and Basis Pursuit (BP) are applied to reconstruct each level of the wavelet transform separately. Experimental results demonstrate that the proposed method gives better quality of the compressed image than existing methods in terms of the proposed image quality evaluation indexes and other objective (PSNR/UIQI/SSIM) measurements.
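The three measurement-matrix choices named above can be generated as sketched below and applied to a coefficient vector; the dimensions are illustrative and the vector is only a stand-in for one level of wavelet coefficients.

```python
# Gaussian, Bernoulli, and random orthogonal measurement matrices applied to a
# stand-in coefficient vector; sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n, m = 1024, 256                      # coefficients per level, measurements (assumed)

gaussian = rng.standard_normal((m, n)) / np.sqrt(m)
bernoulli = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
# Random orthogonal rows: QR of a Gaussian matrix, keep m orthonormal rows.
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
random_ortho = q[:m]

coeffs = rng.standard_normal(n)       # stand-in for one level of wavelet coefficients
for name, phi in [("gaussian", gaussian), ("bernoulli", bernoulli),
                  ("orthogonal", random_ortho)]:
    print(name, (phi @ coeffs).shape)
```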
Exact network reconstruction from consensus signals and one eigenvalue (IJCNCJournal)
The basic inverse problem in spectral graph theory consists in determining the graph given its eigenvalue spectrum. In this paper, we are interested in a network of technological agents whose graph is unknown and which communicate by means of a consensus protocol. Recently, the use of artificial noise added to the consensus signals has been proposed to reconstruct the unknown graph, although errors are possible. On the other hand, some methodologies have been devised to estimate the eigenvalue spectrum, but noise can interfere with these computations. We combine these two techniques in order to simplify the calculations and avoid topological reconstruction errors, using only one eigenvalue. Moreover, we use a high-frequency noise to reconstruct the network, so the control signals are easy to filter after the graph identification. Numerical simulations of several topologies show an exact and robust reconstruction of the graphs.
Face recognition using laplacianfaces (synopsis)Mumbai Academisc
The document proposes a Laplacianface approach for face recognition. It uses locality preserving projections (LPP) to map face images into a subspace for analysis, preserving local information better than PCA or LDA. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. This helps eliminate unwanted variations from lighting, expression, and pose. Experiments show the Laplacianface approach provides better representation and lower error rates than the Eigenface and Fisherface methods.
This document summarizes a paper that introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM aims to reduce the dimensionality of input and output data for computationally intensive simulations like uncertainty quantification. The paper presents a method to calculate probabilistic error bounds for ROM that account for discarded model components. Numerical experiments on a pin cell reactor physics model demonstrate the ability to determine error bounds and validate them against actual errors with high probability. The error bounds approach could enable ROM techniques to self-adapt the level of reduction needed to ensure errors remain below thresholds for reliability.
Probabilistic Error Bounds for Reduced Order Modeling M&C2015Mohammad
This paper introduces a mathematical approach to quantify errors resulting from reduced order modeling (ROM) techniques. ROM works by discarding model components deemed to have negligible impact, but this introduces reduction errors. The paper derives an expression to calculate probabilistic error bounds for the discarded components. Numerical experiments on a pin cell model demonstrate the approach, showing the error bounds capture the actual errors with high probability, even when the ROM is applied under different physics conditions. The error bounding technique allows ROM algorithms to self-adapt and ensure reduction errors remain below user-defined tolerances.
COMPARISON OF VOLUME AND DISTANCE CONSTRAINT ON HYPERSPECTRAL UNMIXINGcsandit
The document compares two algorithms for hyperspectral image unmixing - one based on minimum volume constraint and one based on sum of squared distances constraint. It analyzes the performance of the two algorithms under different conditions like flatness of the endmember simplex, effects of initialization, and robustness to noise. The analysis shows that the sum of squared distances constraint performs better than the volume constraint for non-regular simplex shapes and is more robust to random initialization and noise. The comparison provides guidance on which constraint is more suitable for specific hyperspectral unmixing tasks.
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...Polytechnique Montreal
This paper applies a compressed sensing algorithm to improve the spectrum sensing performance of cognitive radio technology. At the fusion center, the recovery error introduced by the analog-to-information converter (AIC) when reconstructing the transmitted signal from the received time-discrete signal degrades the detection performance. Therefore, we propose a subspace pursuit (SP) algorithm to reduce the recovery error and thereby enhance the detection performance. In this study, we employ a wide-band, low-SNR, distributed compressed sensing regime to analyze and evaluate the proposed approach. Simulations are provided to demonstrate the performance of the proposed algorithm.
This document proposes a holistic approach to reconstructing data in ocean sensor networks using compressive sensing. It involves two key aspects:
1) A node reordering scheme is developed to improve the sparsity of signals in the discrete cosine transform or Fourier transform domain, reducing the number of measurements needed for accurate reconstruction.
2) An improved sparsity-adaptive tracking algorithm is adopted to estimate the sparsity level and then reconstruct the signal step by step, gradually converging on an accurate reconstruction even when the sparsity is unknown.
Simulation results show the proposed method can effectively reduce signal sparsity and accurately reconstruct signals, especially in cases of unknown sparsity.
The document discusses low-rank matrix optimization problems and heuristics for solving rank minimization problems. It covers the following key points:
The document outlines the motivation for extracting low-dimensional structures from high-dimensional data using rank minimization. It then discusses several heuristics for approximating the non-convex rank minimization problem, including replacing the rank with the nuclear norm, using the log-det heuristic as a smooth surrogate, matrix factorization methods, and iteratively solving a sequence of rank-constrained convex problems. Applications mentioned include the Netflix Prize and video intrusion detection.
Implementation of a Localization System for Sensor Networks-berkleyFarhad Gholami
This dissertation discusses the implementation of a localization system for sensor networks. It addresses two main tasks: establishing relationships to reference points (e.g. distance measurements) and using those relationships and reference point positions to calculate sensor positions algorithmically.
The dissertation first presents various centralized and distributed localization algorithms from existing research. It then focuses on implementing a distributed, least-squares-based localization algorithm and designing an ultra-low power hardware architecture for it. Measurement errors due to fixed-point arithmetic are also analyzed.
The second part of the dissertation proposes, designs and prototypes an RF signal-based time-of-flight ranging system. The prototype achieves a measurement error within -0.5m to 2m at 100
Zero-padding a signal involves appending artificial zeros to increase the length of the signal. This increases the frequency resolution of the discrete Fourier transform (DFT) by changing the implicit periodicity assumption made about the signal. Specifically, zero-padding moves the DFT closer to approximating the true discrete-time Fourier transform (DTFT) by changing the assumption from periodicity to assuming the signal is zero outside the observed range. While zero-padding does not provide new information, it can help reveal features of a signal by modifying the implicit assumptions of the DFT.
The document discusses weighted nuclear norm minimization and its applications to image denoising. It provides background on key concepts from linear algebra and optimization theory needed to understand the denoising problem, such as convex optimization, affine transformations, singular value decomposition, and eigendecomposition. The objective of denoising is to extract the low-rank original image from a noisy high-dimensional image, modeled as the sum of the original image and white noise.
This document describes an RSSI (received signal strength indicator) based localization algorithm for wireless sensor networks. It discusses using RSSI values measured from reference nodes to estimate distances and perform trilateration to locate a target sensor node. The algorithm design includes RSSI to distance conversion using a path loss model, trilateration implementation using circle intersections, and simplifying computations for resource-limited sensor node processors through techniques like Taylor series approximations of exponential functions. Pseudocode is provided for RSSI to distance conversion and trilateration calculations.
1. The wavelet transform can be used to detect singularities or discontinuities in signals by identifying large wavelet coefficients around points of abrupt change across multiple scales.
2. The wavelet transform modulus maxima (WTMM) method uses successive derivative wavelets to identify singularities by removing lower order polynomial terms from the signal at each scale.
3. Local maxima of the continuous wavelet transform are related to singularities in the signal, and their behavior across scales can be used to characterize the point-wise regularity of the signal, detect noise, and reconstruct the signal from its singularities.
This project report discusses applying a Sobel edge detection algorithm and median filtering to colour JPEG images. It introduces the Sobel edge detection algorithm, which uses two 3x3 kernels to approximate horizontal and vertical derivatives in an image. It also discusses median filtering to reduce impulsive noise without blurring edges. The report outlines using the libjpeg library to read, write and process JPEG images in C code. It includes the source code for implementing Sobel edge detection and median filtering on JPEG images.
Signal Detection Theory Final Project Report:
Spectral Analysis of Nonuniformly Sampled Data:
A New Approach Versus the Periodogram
Petre Stoica, Fellow, IEEE, Jian Li, Fellow, IEEE, and Hao He, Student Member, IEEE
Farhad Gholami
Abstract:
The Power Spectral Density (PSD) of a random signal y(t) is defined as the expected value (average) of the power of y(t) and is important when analyzing random processes. In practice we need to estimate the PSD from a limited number of noisy samples, typically using a periodogram.
We will see why periodograms generally suffer from two drawbacks:
1) Poor resolution, due to local leakage through the main lobe of the spectral window.
2) Significant global leakage through the side lobes.
First we review PSDs, explain periodograms, and show why the least-squares periodogram (LSP) is preferable to the Fourier periodogram from a data-fitting point of view while remaining computationally inexpensive.
To address these issues, the paper proposes a new method, which can be interpreted as an iteratively weighted LSP that makes use of a data-dependent weighting matrix built from the most recent spectral estimate.
Because this method was derived for the case of real data (which is more complicated to deal with in spectral analysis than complex data), is iterative, and makes use of an adaptive (data-dependent) weighting, it is referred to as the real-valued iterative adaptive approach (RIAA).
Power Spectral Density (PSD) and problem definition:
Consider the formal definition of the PSD:
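A standard form of this definition, assuming a zero-mean stationary discrete-time process y(t) (the notation here is ours, consistent with the rest of the report):

P(\omega) = \lim_{N \to \infty} E\left\{ \frac{1}{N} \left| \sum_{t=1}^{N} y(t)\, e^{-i\omega t} \right|^{2} \right\}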
This definition requires the number of samples {y1, ..., yN} to be very large, which creates the following practical problems:
1) We are only given one realization of the sequence, so we cannot compute expected values.
2) We have a limited number of samples, so we cannot let N approach infinity.
We therefore want a method to estimate the PSD from a finite number of samples.
Applications of Spectral Estimation:
Many systems dealing with random processes need to estimate the PSD for practical reasons.
Speech: formant estimation (for speech recognition), speech coding or compression.
Radar and Sonar: source localization with sensor arrays, synthetic aperture radar imaging and feature extraction.
Electromagnetics: resonant frequencies of a cavity.
Communications: code-timing estimation in DS-CDMA systems.
Spectral Density Estimation Techniques:
Parametric Methods: assume the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, a moving average model). The task is to estimate the parameters of the model that describes the random process.
Nonparametric Methods: estimate the spectrum of the process without assuming that the process has any particular structure (for example, the periodogram or the least-squares periodogram, based on least-squares fitting of sinusoids at known frequencies).
Trade-Offs (Robustness vs. Accuracy): parametric methods may offer better estimates if the data closely agrees with the assumed model; otherwise, nonparametric methods may be better.
Periodogram Definition (derived from the PSD definition):
Given the samples {y1, ..., yN}, we define the periodogram estimate of the power spectral density as the expression below, which is derived from the PSD definition by omitting the expected value and limiting the number of samples to N.
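In standard notation, consistent with the PSD definition above:

\hat{P}(\omega) = \frac{1}{N} \left| \sum_{t=1}^{N} y(t)\, e^{-i\omega t} \right|^{2}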
Periodogram Variance:
The periodogram estimate of the PSD is often noisy; one way of reducing this noise is averaging, as described below.
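One common averaging scheme is Bartlett's method (shown here as a representative example; other schemes such as Welch's method work similarly): split the N samples into L non-overlapping segments of length N/L, compute the periodogram of each segment, and average:

\hat{P}_{B}(\omega) = \frac{1}{L} \sum_{l=1}^{L} \hat{P}_{l}(\omega)

The variance is reduced roughly by a factor of L, at the cost of coarser frequency resolution.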
Windowing effect on the periodogram:
The periodogram can be interpreted as the DFT of the signal multiplied by a window in the time domain, which corresponds to a convolution with a sinc-like function in the frequency domain.
For a rectangular window, the spectral window in the frequency domain is:
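For a rectangular window of length N this spectral window is the Dirichlet kernel (a textbook result, stated here for reference):

W_{R}(\omega) = \sum_{n=0}^{N-1} e^{-i\omega n} = e^{-i\omega (N-1)/2}\, \frac{\sin(\omega N / 2)}{\sin(\omega / 2)}

Its magnitude has a main lobe of width on the order of 2\pi / N and side lobes that decay slowly.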
This window effect creates two main problems for our estimation:
1) Local leakage.
2) Global leakage.
Local leakage is due to the width of the main lobe of the spectral window, and it is what limits the resolution capability of the periodogram.
Global leakage is due to the side lobes of the spectral window, and it is what causes spurious peaks to occur (which leads to "false alarms") and small peaks to drown in the leakage from large peaks (which leads to "misses").
The Modified Periodogram:
Among all windows of the same length N, the rectangular window has the narrowest main lobe (a desirable property here) but also the largest side lobes.
By replacing WR[n] with a different window we can obtain lower side lobes at the price of a wider main lobe; examples are the Bartlett and Hamming windows. To reduce variance we can also perform local averaging of the periodogram, which in turn can increase the bias.
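A common form of the modified periodogram with a window w[n] (a standard definition, included here for completeness; U normalizes the window power):

\hat{P}_{M}(\omega) = \frac{1}{N U} \left| \sum_{n=0}^{N-1} w[n]\, y[n]\, e^{-i\omega n} \right|^{2}, \qquad U = \frac{1}{N} \sum_{n=0}^{N-1} |w[n]|^{2}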
Least Squares (LS) optimization:
Modeling periodic behavior in a noisy time series suggests using a method that can suppress the effect of noise on the data. Least-squares spectral analysis (LSSA) estimates a frequency spectrum based on a least-squares fit of sinusoids to the data samples, which makes it more robust to noisy data.
We can express the periodogram as the solution of a least-squares (LS) data-fitting problem; the power estimate Px is obtained by solving the LS problem using the pseudo-inverse of the regression matrix.
Fourier Periodogram:
The Fourier periodogram (FP) associated with the N samples y(tn) is given below; it can be verified that PF is obtained from the solution of the corresponding least-squares (LS) data-fitting problem.
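In standard notation (our own, consistent with the paper's setup of samples y(t_n) taken at possibly nonuniform times t_n):

P_{F}(\omega) = \frac{1}{N} \left| \sum_{n=1}^{N} y(t_n)\, e^{-i\omega t_n} \right|^{2}

and the associated LS fitting problem is

\min_{\beta} \; \sum_{n=1}^{N} \left| y(t_n) - \beta\, e^{i\omega t_n} \right|^{2},

whose solution \hat{\beta} = \frac{1}{N}\sum_{n} y(t_n) e^{-i\omega t_n} satisfies P_{F}(\omega) = N |\hat{\beta}|^{2}.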
Because the samples are real-valued, the LS criterion above can be rewritten as a sum of two terms. The first term has a sinusoidal data-fitting interpretation and can be minimized; the second term has no data-fitting interpretation and only acts as an additive, data-independent offset.
Least Squares Periodogram:
For the real (sinusoidal) data considered in this paper, the use of the FP is therefore not completely suitable, and a more satisfactory spectral estimate is obtained by solving the following LS fitting problem:
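A standard form of this fitting problem, with a and b denoting the cosine and sine amplitudes at the trial frequency \omega (notation assumed here, consistent with the rest of this report):

\min_{a,\, b} \; \sum_{n=1}^{N} \left[ y(t_n) - a \cos(\omega t_n) - b \sin(\omega t_n) \right]^{2}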
(We omit the dependence of a and b on \omega for notational simplicity.) Collecting the cosine and sine samples into a matrix, the LS criterion can be re-parameterized in vector form, and the solution to this minimization problem is well known. The power of the sinusoidal component with frequency \omega, corresponding to the estimates of a and b, is given by:
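With A(\omega) = [\cos(\omega t_n) \;\; \sin(\omega t_n)] (an N \times 2 matrix) and \theta = [a \;\; b]^{T} (notation assumed here), the LS solution and the power of the fitted sinusoid are

\hat{\theta}(\omega) = \left[ A^{T}(\omega) A(\omega) \right]^{-1} A^{T}(\omega)\, y, \qquad \hat{P}(\omega) = \tfrac{1}{2}\left( \hat{a}^{2}(\omega) + \hat{b}^{2}(\omega) \right)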
The LS periodogram is accordingly given by:
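Up to the paper's normalization convention, the LS periodogram evaluated on a grid of trial frequencies is simply the fitted sinusoidal power:

P_{LS}(\omega) \propto \hat{a}^{2}(\omega) + \hat{b}^{2}(\omega)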
RIAA:
The new method (RIAA) can be interpreted as an iteratively weighted LSP that makes use of a data-dependent (adaptive) weighting matrix built from the most recent spectral estimate of the real-valued data.
The amplitude and phase estimation (APES) method for uniformly sampled data has significantly less leakage (both local and global) than the periodogram.
Here, APES is extended to the nonuniformly sampled data case, which yields RIAA.
The paper also presents a procedure for obtaining a parametric spectral estimate from the nonparametric estimate by means of a Bayesian information criterion (BIC).
Both LSP and RIAA provide nonparametric spectral estimates in the form of an estimated periodogram. We use the frequencies and amplitudes corresponding to the dominant peaks (starting with the largest one, then the second largest, and so on) in a Bayesian information criterion (BIC) to decide which peaks we should retain and which ones we can discard.
The use of BIC for this purpose can be viewed as a way of testing the significance of the dominant peaks of the periodograms.
Collecting the cosine and sine terms for each grid frequency \omega_k into a matrix A_k and the corresponding amplitudes into a vector \theta_k, we can rewrite the LS fitting criterion in the following vector form:
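In this notation (assumed here, mirroring the LSP section above), the per-frequency LS criterion reads

\left\| y - A_{k}\, \theta_{k} \right\|^{2}, \qquad A_{k} = \begin{bmatrix} \cos(\omega_{k} t_{1}) & \sin(\omega_{k} t_{1}) \\ \vdots & \vdots \\ \cos(\omega_{k} t_{N}) & \sin(\omega_{k} t_{N}) \end{bmatrix}, \quad \theta_{k} = \begin{bmatrix} a_{k} \\ b_{k} \end{bmatrix}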
Assuming that Qk, the covariance matrix of the residual term at frequency \omega_k, is available and invertible, it makes sense to consider the following weighted LS (WLS) criterion. Indeed, it is well known that the estimate of \theta_k obtained by minimizing this criterion is more accurate under quite general conditions, and it has a closed-form expression:
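A standard form of this WLS criterion and of its minimizer (notation as above; this is the usual generalized least-squares expression):

\left( y - A_{k}\theta_{k} \right)^{T} Q_{k}^{-1} \left( y - A_{k}\theta_{k} \right), \qquad \hat{\theta}_{k} = \left( A_{k}^{T} Q_{k}^{-1} A_{k} \right)^{-1} A_{k}^{T} Q_{k}^{-1} y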
The weighted least-squares (WLS) periodogram is then defined from the powers of these WLS amplitude estimates.
The PWLS estimate requires the inversion of a different matrix Qk for every grid frequency, which would be a computationally intensive task.
To reduce the computational complexity, we define a new matrix \Gamma that does not depend on k. Using it, the WLS estimate can be rewritten in an equivalent form (sketched below) that is computationally simpler, because the matrix inverse needs to be computed only once for all values of k = 1, ..., K.
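A sketch of this step in the standard IAA-style construction (stated here as an assumption about the precise definitions): with P_k denoting the 2×2 power matrix of the sinusoid at \omega_k,

\Gamma = \sum_{j=1}^{K} A_{j} P_{j} A_{j}^{T}, \qquad Q_{k} = \Gamma - A_{k} P_{k} A_{k}^{T},

and by the matrix inversion lemma the WLS estimate can be computed equivalently as

\hat{\theta}_{k} = \left( A_{k}^{T} \Gamma^{-1} A_{k} \right)^{-1} A_{k}^{T} \Gamma^{-1} y,

so only \Gamma^{-1} is needed, once, for all k.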
Here we explain how to resolve the problem that \Gamma depends on the \theta_k quantities that we want to estimate, so that the above expressions cannot be implemented directly.
The only apparent solution to this problem is an iterative process; the proposed RIAA algorithm, sketched below, addresses this issue.
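A schematic MATLAB sketch of the iteration structure follows. It is illustrative only: the function name riaa_sketch, the fixed number of iterations, the small diagonal regularization of Gamma, and the choice Pk = theta_k*theta_k' for the per-frequency power matrix are assumptions made for this sketch, not the paper's exact algorithm.

% Schematic RIAA-style iteration (illustrative sketch only; see the note above).
function Pest = riaa_sketch(t, y, omega)
    % t     : sample times (possibly nonuniform), y : real-valued samples
    % omega : grid of trial angular frequencies
    N = length(y); K = length(omega); y = y(:); t = t(:);
    A = cell(K,1); TH = zeros(2,K);
    for k = 1:K                              % LSP initialization
        A{k} = [cos(omega(k)*t), sin(omega(k)*t)];
        TH(:,k) = A{k} \ y;                  % least-squares fit at omega(k)
    end
    for it = 1:15                            % adaptive re-weighting loop (assumed 15 iterations)
        G = 1e-8*eye(N);                     % Gamma, with a tiny regularization (assumption)
        for k = 1:K
            Pk = TH(:,k)*TH(:,k)';           % assumed per-frequency power matrix
            G = G + A{k}*Pk*A{k}';
        end
        Gi = inv(G);                         % inverted once per iteration, reused for all k
        for k = 1:K                          % WLS update for every grid frequency
            Ak = A{k};
            TH(:,k) = (Ak'*Gi*Ak) \ (Ak'*Gi*y);
        end
    end
    Pest = 0.5*sum(TH.^2,1)';                % estimated sinusoidal powers on the grid
end

In this sketch the single inverse of Gamma per iteration is reused for all K grid frequencies, which is exactly the computational saving discussed above.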
In most applications, the RIAA algorithm is expected to require no more than 10-20 iterations.
RIAA Performance:
Here we provide some insight into the expected behavior of RIAA. In particular, we explain intuitively why RIAA is expected to have less leakage (both local and global) than the LSP.
RIAA estimates the residual matrix Qk using a theoretical formula, along with the most recent spectral estimate available; in the spectral analysis problem considered in this paper, only one realization of y is at our disposal.
Now define Hk as the matrix that solves the following constrained minimization problem, where f is a monotonically increasing function on the domain of positive definite matrices. We use Hk to obtain an estimate of \theta_k, and the solution to this minimization problem has a closed form, both sketched below.
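A standard statement of this filter-design problem and its solution (our reconstruction, in the notation used above):

\min_{H_{k}} f\!\left( H_{k}^{T} Q_{k} H_{k} \right) \quad \text{subject to} \quad H_{k}^{T} A_{k} = I, \qquad \hat{\theta}_{k} = H_{k}^{T} y,

with solution

H_{k} = Q_{k}^{-1} A_{k} \left( A_{k}^{T} Q_{k}^{-1} A_{k} \right)^{-1},

so that \hat{\theta}_{k} = H_{k}^{T} y coincides with the WLS estimate given earlier.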
We can rewrite the constrained criterion in an equivalent form; because the matrix appearing on its left-hand side is positive semi-definite, this concludes the proof that Hk yields the WLS estimate.
The intuition for how the WLS estimate reduces leakage is instructive.
The matrix Hk that solves the optimization problem can be viewed as a "filter" that passes the sinusoidal component of current interest (with frequency \omega_k) without any distortion and attenuates all the other components as much as possible.
To illustrate this property of Hk, we use the fact that, by assumption, the data contains a finite (usually small) number of sinusoidal components. This means that there are only a limited number of frequencies which contribute significant terms to Qk.
Let \omega_p be one of these frequencies, with \omega_p different from \omega_k. Then Hk should be nearly orthogonal to Ap, which means that Hk filters out any strong sinusoidal component whose frequency differs from \omega_k. This observation explains why RIAA can be expected to have significantly reduced leakage problems compared with the LSP.
Bayesian Information Criterion (BIC):
The Bayesian Information Criterion (BIC) rule is a statistical hypothesis test used to decide whether the most dominant peaks of the LSP (or RIAA) periodogram are significant.
To explain how this is done, we sort the frequency, amplitude, and phase parameters corresponding to the M largest peaks of either the LSP or the RIAA periodogram evaluated on the frequency grid.
Under the assumptions that the data sequence consists of a finite number of sinusoidal components plus normal white noise, and that these values are the maximum likelihood (ML) estimates of the frequencies, amplitudes, and phases, the BIC rule estimates M as the minimizer of a criterion of the form sketched below.
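A generic form of this rule (the exact penalty coefficient follows the paper; here \hat{\sigma}^{2}_{M} denotes the residual variance after fitting M sinusoids, and each sinusoid contributes three parameters in the simplest accounting):

\hat{M} = \arg\min_{M} \left[ N \ln \hat{\sigma}^{2}_{M} + 3 M \ln N \right]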
The BIC is made of two terms:
1) An LS data-fitting term that decreases as M increases.
2) A complexity penalization term that increases with increasing M.
The BIC estimate is therefore a trade-off between in-sample fitting accuracy and the complexity of the sinusoidal description of the data.
Sampling pattern design:
The paper also outlines a method for designing an optimal sampling pattern that minimizes an objective function based on the spectral window.
1) Assume that a sufficient number of observations are already available, from which we can get a reasonably accurate spectral estimate.
2) Make use of this spectral estimate to design the sampling times at which future measurements should be performed (by optimizing the objective function).
Conclusion:
LSP and RIAA are nonparametric methods that can be used for the spectral analysis of general data sequences with both continuous and discrete spectra. However, they are most suitable for data sequences with discrete spectra (i.e., sinusoidal data), which is the case emphasized in this paper.
For the latter type of data, we presented a procedure for obtaining a parametric spectral estimate, from
the LSP or RIAA nonparametric estimate, by means of a Bayesian information criterion (BIC).
The use of BIC for the said purpose can be viewed as a way of testing the significance of the dominant peaks of the LSP or RIAA periodograms, a problem for which there was hardly any satisfactory solution available before.
We also discussed a possible strategy for designing the sampling pattern of future measurements, based
on the spectral estimate obtained from the already available observations.
Appendix: MATLAB simulation of the periodogram:
%=============================================================
% program pdf_estimate_test.m
%=============================================================
age=[0:600]';
rand('state',0);
ager=age+0.3*rand(size(age))-.15;
ager(1)=age(1);
ager(601)=age(601);
depth=age/10; %creates depth between 0 and 60
bkg=interp1([0:10:600],rand(61,1),ager);
f1=1/95;
f2=1/125;
sig=cos(2*pi*f1*ager)+cos(2*pi*f2*ager+pi); %two sinusoids at f1 and f2 (the second with a pi phase offset)
o18=sig+bkg;
%pick frequencies to evaluate spectral power
freq=[0:0.0001:0.02]';
power=pdf_estimate(ager,o18,freq); %evaluate the periodogram at the nonuniform sample times ager
power(1)=0;
%normalize the spectral power by its standard deviation
power = power/std(power);
%plot the results
figure;
plot(freq,power);
xlabel('frequency(cycle/kyr)');
ylabel('spectral power');
%=============================================================
% function pdf_estimate.m  (save as pdf_estimate.m)
% Calculates the least-squares (Lomb-Scargle) periodogram of the
% samples y at the (possibly nonuniform) times t, for frequencies freq.
%=============================================================
function power = pdf_estimate(t,y,freq)
nfreq=length(freq);
power=zeros(nfreq,1);
t=t(:); y=y(:)-mean(y);           %remove the mean before fitting sinusoids
for k=1:nfreq
    w=2*pi*freq(k);
    if w==0, continue; end        %leave the zero-frequency component at zero power
    tau=atan2(sum(sin(2*w*t)),sum(cos(2*w*t)))/(2*w);
    c=cos(w*(t-tau)); s=sin(w*(t-tau));
    power(k)=0.5*((sum(y.*c))^2/sum(c.^2)+(sum(y.*s))^2/sum(s.^2));
end
References
1) P. Stoica, J. Li, and H. He, "Spectral Analysis of Nonuniformly Sampled Data: A New Approach Versus the Periodogram," IEEE Transactions on Signal Processing, vol. 57, 2009.
2) J. Li and P. Stoica, "An adaptive filtering approach to spectral estimation and SAR imaging," IEEE Transactions on Signal Processing, vol. 44, 1996.
3) J. Li, EEL 6537 - Spectral Estimation, course notes, Department of Electrical and Computer Engineering, University of Florida.