This is an experimental design problem in which there are two Poisson sources with two possible and known rates, and one counter. Through a switch, the counter can observe the sources individually, or the counts can be combined so that the counter observes the sum of the two. The sensor scheduling problem is to determine an optimal proportion of the available time to be allocated toward individual and joint sensing, under a total time constraint. Two different metrics are used for optimization: mutual information between the sources and the observed counts, and probability of detection for the associated source detection problem. Our results, which are primarily computational, indicate similar but not identical optima under the two cost functions.
The document discusses von Neumann entropy in quantum computation. It provides definitions of key terms like von Neumann entropy, density matrix, and computational complexity theory. Von Neumann entropy extends concepts of classical entropy to quantum mechanics and characterizes the classical and quantum information capacities of an ensemble. It quantifies the degree of mixing of a quantum state and how much a state departs from a pure state. The von Neumann entropy of a system is computed using the density matrix and eigendecomposition of the system's quantum state.
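The eigendecomposition route described here can be sketched in a few lines of NumPy; the example states and the base-2 logarithm (entropy in bits) are illustrative choices, not taken from the document:

```python
import numpy as np

def von_neumann_entropy(rho, base=2):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)        # density matrices are Hermitian
    evals = evals[evals > 1e-12]           # 0 * log(0) is taken as 0
    return float(-np.sum(evals * np.log(evals)) / np.log(base))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0|, a pure state
mixed = np.eye(2) / 2                        # maximally mixed qubit
print(von_neumann_entropy(pure))             # zero entropy: no mixing
print(von_neumann_entropy(mixed))            # one full bit of mixing
```

A pure state always gives zero entropy, and the maximally mixed qubit saturates the log2(d) upper bound, which is the "degree of mixing" the abstract refers to.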
Behavior study of entropy in a digital image through an iterative algorithm (ijscmcj)
Image segmentation is a critical step in computer vision tasks, constituting an essential issue for pattern recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images through an iterative algorithm of mean shift filtering. The order of a digital image in gray levels is defined. The behavior of Shannon entropy is analyzed and then compared, taking into account the number of iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The equivalence classes this induces allow us to interpret entropy as a hyper-surface in real m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image is used to group the iterations, in order to characterize the performance of the algorithm.
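The two quantities compared above can be sketched directly; the image, the number of gray levels, and the absence of the mean shift filter itself are illustrative simplifications:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy (in bits) of the gray-level histogram of an image."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))    # stand-in for a filtered image
h = image_entropy(img)
h_max = np.log2(256)                         # maximum entropy for order n = 256
print(h, h_max, h_max - h)                   # the gap is the quantity tracked per iteration
```

In the paper's setting, this gap between the maximum entropy of the order and the image entropy is what groups the filtering iterations.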
A COMPARATIVE STUDY ON DISTANCE MEASURING APPROACHES FOR CLUSTERING (IJORCS)
Clustering plays a vital role in various areas of research such as data mining, image retrieval, bio-computing and many others. The distance measure plays an important role in clustering data points, and choosing the right distance measure for a given dataset is a major challenge. In this paper, we study various distance measures and their effect on different clustering methods. This paper surveys existing distance measures for clustering and presents a comparison between them based on application domain, efficiency, benefits and drawbacks. This comparison helps researchers make a quick decision about which distance measure to use for clustering. We conclude this work by identifying trends and challenges of research and development towards clustering.
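As an illustration of how such measures differ on the same pair of points (the vectors are arbitrary examples, not from the survey), four common clustering distances can be compared side by side:

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def manhattan(a, b):
    return float(np.abs(a - b).sum())

def chebyshev(a, b):
    return float(np.abs(a - b).max())

def cosine_dist(a, b):
    return float(1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 0.0, 3.0])
for dist in (euclidean, manhattan, chebyshev, cosine_dist):
    print(dist.__name__, round(dist(a, b), 4))
```

The same pair of points is ranked differently by different measures, which is exactly why the choice of measure changes the resulting clusters.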
The document discusses applying random distortion testing (RDT) in a spectral clustering context. RDT is a framework for guaranteeing a false alarm probability threshold in detecting distorted data using threshold-based tests. The document introduces RDT and spectral clustering concepts. It then proposes using the p-value from RDT as the similarity function or kernel in spectral clustering, to handle disturbed data. Experiments are conducted to compare the partitioning performance of the RDT p-value kernel to the Gaussian kernel.
On the approximation of the sum of lognormals by a log skew normal distribution (IJCNCJournal)
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of each method relies highly on the region of the resulting distribution being examined, and on the individual lognormal parameters, i.e., mean and variance. No single method can provide the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of lognormals based on the log skew normal approximation. The main contribution of this work is an analytical method for log skew normal parameter estimation. The proposed method provides a highly accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
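The paper's log skew normal estimator is not reproduced here; as a hedged illustration of the problem setting, this sketch draws Monte Carlo samples of a sum of independent lognormals and builds the classical Fenton-Wilkinson lognormal moment match that such methods aim to improve on (all parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.5])        # natural-log means (illustrative)
sigma = np.array([0.6, 0.8])     # natural-log standard deviations (illustrative)
X = rng.lognormal(mu, sigma, size=(100_000, 2))
S = X.sum(axis=1)                # Monte Carlo samples of the lognormal sum

# Fenton-Wilkinson baseline: one lognormal matching the mean and variance of S
m = np.exp(mu + sigma ** 2 / 2)              # per-term means
v = (np.exp(sigma ** 2) - 1) * m ** 2        # per-term variances (independent terms)
mS, vS = m.sum(), v.sum()
sig2 = np.log(1 + vS / mS ** 2)
muS = np.log(mS) - sig2 / 2
approx_mean = np.exp(muS + sig2 / 2)         # mean of the matched lognormal
print(S.mean(), approx_mean)                 # the match reproduces the mean of the sum
```

A moment match like this is accurate near the body of the distribution but degrades in the tails, which is the gap the log skew normal approach targets.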
Robust Fuzzy Data Clustering In An Ordinal Scale Based On A Similarity Measure (IJRES Journal)
This paper is devoted to processing data given in an ordinal scale. A new objective function of a special type is introduced, and a group of robust fuzzy clustering algorithms based on the similarity measure is proposed.
Heuristic Sensing Schemes for Four-Target Detection in Time-Constrained Vecto... (sipij)
In this work we investigate different sensing schemes for the detection of four targets observed through vector Poisson and Gaussian channels, when the sensing time resource is limited and the source signals can be observed through a variety of sum combinations during that fixed time. For this purpose we maximize either the mutual information or the detection probability with respect to the time allocated to the different sum combinations, for a given total fixed time. It is observed that for both Poisson and Gaussian channels, mutual information and Bayes risk with 0-1 cost are not necessarily consistent with each other. For certain sensing schemes, the mutual information between input and output is shown to be concave with respect to the allocated times under a linear time constraint, in both the Poisson and the Gaussian channel. Finding an optimal sensing scheme for either of the two channels is not investigated in this work.
Blind Audio Source Separation (BASS): An Unsupervised Approach (IJEEE)
Audio processing is an area where signal separation is considered a fascinating task, potentially offering a wide range of new possibilities and experiences in professional and personal contexts. The objective of Blind Audio Source Separation (BASS) is to separate audio signals from multiple independent sources in an unknown mixing environment. This paper addresses the key challenges in BASS and unsupervised approaches to counter these challenges. A comparative performance analysis of the Fast-ICA algorithm and Convex Divergence ICA for blind source separation is presented with the help of experimental results. The results reflect that Convex Divergence ICA with α = -1 gives a more accurate estimate in comparison to Fast-ICA.
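The Fast-ICA side of that comparison can be sketched with a minimal symmetric fixed-point implementation (tanh nonlinearity); the mixing matrix, sources, and iteration count are illustrative assumptions, and Convex Divergence ICA is not shown:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity); X has samples in rows."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    d, E = np.linalg.eigh(cov)
    Z = X @ E / np.sqrt(d)                       # whitened data
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Z.shape[1], Z.shape[1]))
    for _ in range(n_iter):
        Y = Z @ W.T                              # current source estimates
        g, gp = np.tanh(Y), 1 - np.tanh(Y) ** 2
        W = (g.T @ Z) / len(Z) - np.diag(gp.mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)              # symmetric decorrelation
        W = U @ Vt
    return Z @ W.T

t = np.linspace(0, 8, 4000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]   # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # unknown mixing matrix
X = S @ A.T                                        # observed mixtures
S_hat = fastica(X)
# each recovered component should correlate strongly with one true source
C = np.abs(np.corrcoef(S_hat.T, S.T)[:2, 2:])
print(C.max(axis=1))
```

ICA recovers sources only up to permutation, sign, and scale, which is why the check uses absolute correlations and a per-row maximum.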
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING: FINDING ALL THE POTENTIAL MI... (IJDKP)
Quantum clustering (QC) is a data clustering algorithm based on quantum mechanics, accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a σ value, a hyper-parameter which can be manually defined and manipulated to suit the application. Numerical methods are used to find all the minima of the quantum potential, as they correspond to cluster centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is a challenging task because such expressions are normally impossible to solve analytically. However, we prove that if the points are all included in a square region of size σ, there is only one minimum. This bound is useful not only for limiting the number of solutions to look for by numerical means; it also allows us to propose a new numerical approach “per block”. This technique decreases the number of particles by approximating some groups of particles with weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
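The quantum potential whose minima serve as cluster centers can be evaluated numerically from the Gaussian (Parzen) wavefunction; the dataset, the σ value, and the 1-D slice through the plane below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def quantum_potential(points, grid, sigma):
    """Unshifted quantum potential of the Parzen wavefunction
    psi(x) = sum_i exp(-||x - x_i||^2 / (2 sigma^2)):
    V(x) ~ (1 / (2 sigma^2 psi)) * sum_i ||x - x_i||^2 exp(-||x - x_i||^2 / (2 sigma^2))."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    psi = w.sum(axis=1)
    return (d2 * w).sum(axis=1) / (2 * sigma ** 2 * psi)

rng = np.random.default_rng(0)
points = np.vstack([rng.normal([0, 0], 0.2, (30, 2)),
                    rng.normal([3, 0], 0.2, (30, 2))])
xs = np.linspace(-1.0, 4.0, 101)
grid = np.column_stack([xs, np.zeros_like(xs)])   # 1-D slice through both clusters
V = quantum_potential(points, grid, sigma=0.5)
minima = [i for i in range(1, len(xs) - 1) if V[i] < V[i - 1] and V[i] < V[i + 1]]
print(xs[minima])   # potential minima sit near the cluster centers
```

Grid search like this is exactly the brute-force numerical route whose cost motivates bounding the number of minima per σ-sized block.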
New Data Association Technique for Target Tracking in Dense Clutter Environme... (CSCJournals)
Improving the data association process by increasing the probability of detecting valid data points (measurements obtained from a radar/sonar system) in the presence of noise for target tracking is discussed in this manuscript. We develop a novel gate filtering algorithm for target tracking in dense clutter environments. This algorithm is less sensitive to false alarms (clutter) within the gate than conventional approaches such as the probabilistic data association filter (PDAF), whose data association begins to fail as the false alarm rate increases or the probability of target detection drops. The new filtered gate selection method combines a conventional threshold-based algorithm with a geometric distance measure based on one of the filtering methods underlying adaptive clutter suppression. An adaptive search based on a distance threshold is then used to detect valid filtered data points for target tracking. Simulation results demonstrate effectiveness and better performance compared to the conventional algorithm.
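The paper's combined gate is not reproduced here, but the conventional threshold-based part it builds on is the standard ellipsoidal validation gate; the predicted measurement, innovation covariance, and the 99% chi-square threshold (9.21 for 2 degrees of freedom, a standard table value) are illustrative assumptions:

```python
import numpy as np

def gate(measurements, z_pred, S, gamma=9.21):
    """Keep measurements whose Mahalanobis distance to the predicted
    measurement z_pred falls inside the validation gate.
    gamma = 9.21 is the 99% chi-square quantile for 2 dof (table value)."""
    Sinv = np.linalg.inv(S)
    diff = measurements - z_pred
    d2 = np.einsum('ij,jk,ik->i', diff, Sinv, diff)   # quadratic form per row
    return measurements[d2 <= gamma]

z_pred = np.array([10.0, 5.0])
S = np.diag([2.0, 2.0])                     # innovation covariance
meas = np.array([[10.5, 5.2],               # valid return
                 [11.0, 4.0],               # valid return
                 [25.0, 30.0]])             # clutter, far outside the gate
kept = gate(meas, z_pred, S)
print(kept)
```

Everything inside the gate is then handed to the association logic; the paper's contribution is in filtering that set further before association.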
The document discusses calculating and plotting the allocation of entropy for bipartite and tripartite quantum systems. It provides tables of entropy calculations for a bipartite system of |0> and |1> states, and checks that the results satisfy the subadditivity inequality. It also outlines the methodology to perform similar calculations and plots for other systems to visualize the convex cones of entropy allocation.
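A bipartite check of the subadditivity inequality S(AB) <= S(A) + S(B) can be carried out directly from density matrices; the Bell state used here is an illustrative choice, not a case from the document's tables:

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 keeps subsystem A."""
    r = rho.reshape(2, 2, 2, 2)                  # indices (a, b, a', b')
    return np.einsum('ijik->jk', r) if keep == 1 else np.einsum('ijkj->ik', r)

psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)                 # Bell state (|00> + |11>)/sqrt(2)
rho_ab = np.outer(psi, psi)
rho_a = partial_trace(rho_ab, 0)
rho_b = partial_trace(rho_ab, 1)
print(entropy(rho_ab), entropy(rho_a), entropy(rho_b))
assert entropy(rho_ab) <= entropy(rho_a) + entropy(rho_b)   # subadditivity holds
```

The Bell state is the extreme case: the joint state is pure (zero entropy) while each marginal is maximally mixed, so the inequality is satisfied with the largest possible slack.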
Receding Horizon Stochastic Control Algorithms for Sensor Management, ACC 2010 (Darin Hitchings, Ph.D.)
This document summarizes a research paper about developing receding horizon control algorithms for sensor management problems. The algorithms aim to efficiently control information acquisition from smart sensors that can dynamically change their observations. The proposed approaches use a stochastic control approximation and receding horizon optimization to obtain feasible sensor schedules from mixed strategies. Simulations showed the receding horizon algorithms achieved near-optimal performance comparable to a theoretical lower bound, and that a modest horizon was sufficient.
This document presents a new method called the Real-valued Iterative Adaptive Approach (RIAA) for estimating the power spectral density of nonuniformly sampled data. It aims to improve upon the periodogram, which suffers from poor resolution and leakage. RIAA is an iteratively weighted least squares periodogram that uses an adaptive weighting matrix built from the most recent spectral estimate. It is shown to have significantly less leakage than the least squares periodogram through its use of an adaptive filter. The Bayesian Information Criterion is also discussed as a way to test the significance of peaks in the estimated spectrum.
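RIAA itself iterates an adaptive weighting matrix, which is not reproduced here; the least squares periodogram it improves on can be sketched as follows (sampling times, signal, and frequency grid are illustrative assumptions):

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least squares periodogram for nonuniformly sampled data: at each trial
    frequency, fit a*cos + b*sin by least squares and report the fitted power."""
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.c_[np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = 0.5 * np.sum(coef ** 2)
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 200))            # nonuniform sample times
y = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(200)
freqs = np.linspace(0.1, 3.0, 300)
p = ls_periodogram(t, y, freqs)
print(freqs[np.argmax(p)])    # peak near the true frequency 1.5
```

RIAA replaces the unweighted least squares fit at each frequency with a weighted fit whose weighting matrix is rebuilt from the most recent spectral estimate, which is what suppresses the leakage this plain version exhibits.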
Quantum information as the information of infinite series (Vasil Penchev)
Quantum information is equivalent to the generalization of classical information from finite to infinite series or collections.
The quantity of information is the quantity of choices measured in units of elementary choice.
The qubit is the generalization of the bit: a choice among a continuum of alternatives.
The axiom of choice is necessary for quantum information: the coherent state is transformed into a well-ordered series of results in time after measurement.
The quantity of quantum information is the ordinal corresponding to the infinite series in question.
Trust Region Algorithm - Bachelor Dissertation (Christian Adom)
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
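The Cauchy point mentioned above minimizes the quadratic model along the steepest descent direction, clipped to the trust region boundary; the gradient, Hessian approximation, and radius below are arbitrary illustrative values:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point for the trust region subproblem
    min m(p) = g.p + 0.5 p.B.p  subject to  ||p|| <= delta."""
    gBg = g @ B @ g
    gnorm = np.linalg.norm(g)
    if gBg <= 0:
        tau = 1.0                                   # model decreases all the way to the boundary
    else:
        tau = min(1.0, gnorm ** 3 / (delta * gBg))  # unconstrained minimizer along -g, clipped
    return -tau * (delta / gnorm) * g

g = np.array([4.0, 2.0])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
p = cauchy_point(g, B, delta=1.0)
print(p, np.linalg.norm(p))        # step stays inside the trust region
```

The Cauchy point guarantees a sufficient model decrease; the double dogleg step then improves on it by bending toward the full Newton step when the trust region allows.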
Quantum information as the information of infinite series (Vasil Penchev)
The quantum information introduced by quantum mechanics is equivalent to the generalization of classical information from finite to infinite series or collections. The quantity of information is the quantity of choices measured in units of elementary choice. The qubit can be interpreted as the generalization of the bit: a choice among a continuum of alternatives. The axiom of choice is necessary for quantum information. The coherent state is transformed into a well-ordered series of results in time after measurement. The quantity of quantum information is the ordinal corresponding to the infinite series in question.
This document summarizes a research paper that proposes a new method to accelerate the nearest neighbor search step of the k-means clustering algorithm. The k-means algorithm is computationally expensive due to calculating distances between data points and cluster centers. The proposed method uses geometric relationships between data points and centers to reject centers that are unlikely to be the nearest neighbor, without decreasing clustering accuracy. Experimental results showed the method significantly reduced the number of distance computations required.
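The paper's exact geometric tests are not reproduced here; the flavor of the idea can be shown with the classic triangle inequality rejection (if the current best center is close to the point but far from center c, then c cannot be nearer), with illustrative data and centers:

```python
import numpy as np

def assign_with_rejection(X, centers):
    """Assign each point to its nearest center, skipping centers that the
    triangle inequality proves cannot win: if d(c_best, c) >= 2*d(x, c_best),
    then d(x, c) >= d(x, c_best)."""
    cc = np.linalg.norm(centers[:, None] - centers[None], axis=2)  # center-center distances
    labels = np.empty(len(X), dtype=int)
    skipped = 0
    for i, x in enumerate(X):
        best, d_best = 0, np.linalg.norm(x - centers[0])
        for j in range(1, len(centers)):
            if cc[best, j] >= 2 * d_best:
                skipped += 1               # rejected without a distance computation
                continue
            d = np.linalg.norm(x - centers[j])
            if d < d_best:
                best, d_best = j, d
        labels[i] = best
    return labels, skipped

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ([0, 0], [10, 0], [0, 10])])
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
labels, skipped = assign_with_rejection(X, centers)
print(skipped, "distance computations avoided")
```

Because the rejection rule only skips centers it can prove are no better, the assignments are identical to a full nearest neighbor search, which matches the paper's claim of no loss in clustering accuracy.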
International Journal of Engineering and Science Invention (IJESI)
This document discusses multidimensional clustering methods for data mining and their industrial applications. It begins with an introduction to clustering, including definitions and goals. Popular clustering algorithms are described, such as K-means, fuzzy C-means, hierarchical clustering, and mixture of Gaussians. Distance measures and their importance in clustering are covered. The K-means and fuzzy C-means algorithms are explained in detail. Examples are provided to illustrate fuzzy C-means clustering. Finally, applications of clustering algorithms in fields such as marketing, biology, and earth sciences are mentioned.
This document discusses a hybridization of the Magnetic Charge System Search (MCSS) method for efficient data clustering. MCSS is a meta-heuristic algorithm inspired by electromagnetic theory that has shown potential but also has issues with convergence rate and getting stuck in local optima. The authors propose a Hybrid MCSS (HMCSS) that incorporates a local search strategy and differential evolution inspired updating to improve convergence. An experiment on benchmark functions and real clustering problems shows HMCSS provides better results than existing algorithms and enhances MCSS convergence.
Histogram-Based Method for Effective Initialization of the K-Means Clustering... (Gingles Caroline)
This document proposes a histogram-based method for initializing cluster centers in k-means clustering. The method works by recursively finding the most populated histogram bin for each attribute dimension, using the bin centroid as the coordinate for that dimension. This focuses the cluster centers on dense regions of the data distribution. The method is linear in complexity, deterministic, and order-invariant, making it suitable for large datasets where other initialization methods are impractical or unreliable. Experimental results on UCI datasets show it outperforms the commonly used maximin initialization method.
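A loose sketch of the densest-bin idea follows; the per-dimension recursion matches the description above, while the step that removes the claimed region before seeking the next center is this sketch's own assumption (the paper's actual procedure for subsequent centers is not reproduced):

```python
import numpy as np

def histogram_init(X, k, bins=20):
    """Sketch of histogram-based seeding: per dimension, take the most populated
    bin, fix that coordinate at the bin centroid, and recurse on the points
    inside the bin for the next dimension."""
    data = X.copy()
    centers = []
    for _ in range(k):
        subset = data
        center = []
        for dim in range(X.shape[1]):
            counts, edges = np.histogram(subset[:, dim], bins=bins)
            b = int(np.argmax(counts))
            mask = (subset[:, dim] >= edges[b]) & (subset[:, dim] <= edges[b + 1])
            center.append(float(subset[mask, dim].mean()))   # bin centroid
            subset = subset[mask]
        centers.append(center)
        # assumption of this sketch: drop the claimed half of the data so the
        # next center is forced into a different dense region
        d = np.linalg.norm(data - np.array(center), axis=1)
        data = data[d > np.median(d)]
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.2, (200, 2)),
               rng.normal([5, 5], 0.2, (200, 2))])
seeds = histogram_init(X, 2)
print(seeds)    # one seed near each dense region
```

Being histogram-driven, the procedure is deterministic and order-invariant, which is the property that makes it attractive for large datasets compared to random seeding.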
The document describes using a Kalman filter for road map estimation and extended target tracking. It presents a framework that models extended objects using polynomials based on imagery sensor data. State-space models allow the use of Kalman filters for tracking. The Kalman filter provides optimal, recursive estimates by minimizing estimated error covariance. It predicts the next state and corrects the estimate based on new measurements. Simulation results demonstrate tracking an object using the Kalman filter compared to a conventional method.
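The predict/correct cycle described above can be shown for a minimal 1-D constant velocity tracker; all model matrices and noise levels here are illustrative assumptions, not the document's road map setup:

```python
import numpy as np

# 1-D constant velocity tracking: state is [position, velocity]
dt = 1.0
F = np.array([[1, dt], [0, 1]])        # state transition
H = np.array([[1.0, 0.0]])             # only position is measured
Q = 0.01 * np.eye(2)                   # process noise covariance
R = np.array([[0.5]])                  # measurement noise covariance

x, P = np.array([0.0, 0.0]), np.eye(2)
rng = np.random.default_rng(0)
true_pos = np.arange(1, 21, dtype=float)              # object moving at 1 unit/step
for z in true_pos + rng.normal(0, np.sqrt(0.5), 20):
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # correct
    y = z - H @ x                                     # innovation
    S = H @ P @ H.T + R                               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
print(x)    # estimated [position, velocity]; velocity settles near 1.0
```

The same predict/correct recursion carries over unchanged to the extended-target case once the polynomial road model is written in state-space form.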
ADAPTIVE BLIND MULTIUSER DETECTION UNDER IMPULSIVE NOISE USING PRINCIPAL COMP... (csandit)
In this paper we consider blind signal detection for an asynchronous code division multiple access (CDMA) system using principal component analysis (PCA) in impulsive noise. The blind multiuser detector requires no training sequences, in contrast to the conventional multiuser detection receiver. The proposed PCA blind multiuser detector is robust compared with detectors that require knowledge of the signature waveforms and the timing of the user of interest. PCA is a statistical method for reducing the dimension of a data set via spectral decomposition of its covariance matrix, i.e., first- and second-order statistics are estimated.
PCA makes no assumption on the independence of the data vectors; it searches for the linear combinations with the largest variances and, when several linear combinations are needed, considers variances in decreasing order of importance. PCA also improves the SNR of signals used for differential side channel analysis. In contrast to other approaches, the linear minimum mean-square-error (MMSE) detector is obtained blindly; the detector does not use any training sequence, as subspace methods do, to detect the multiuser receiver. The algorithm need not estimate the subspace rank, which reduces the computational complexity. Simulation results show that the new algorithm offers substantial performance gains over traditional subspace methods.
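The covariance-spectral-decomposition view of PCA described above can be sketched directly; the data dimensions and target rank are illustrative, not the CDMA receiver itself:

```python
import numpy as np

def pca_reduce(X, k):
    """Project X onto its k principal components via eigendecomposition
    of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]         # keep largest-variance directions
    return Xc @ evecs[:, order]

rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))
X = latent @ rng.standard_normal((2, 8))        # 8-dim data with 2-dim structure
Y = pca_reduce(X, 2)
print(np.var(Y, axis=0).sum() / np.var(X, axis=0).sum())   # ~all variance retained
```

Because the directions are ordered by variance, truncating to k components is exactly the "decreasing order of importance" reduction the abstract describes.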
Adaptive blind multiuser detection under impulsive noise using principal comp... (csandit)
The document describes an adaptive blind multiuser detection method for asynchronous code division multiple access (CDMA) systems using principal component analysis (PCA) in impulsive noise environments. PCA is used to extract the principal components from the received signal without requiring training sequences or prior knowledge of channel characteristics. The PCA blind multiuser detector provides robust performance compared to knowledge-based detectors when signature waveforms and timing offsets of users are unknown. Simulation results show the proposed PCA method offers substantial gains over traditional subspace methods for multiuser detection.
This document describes rsnSieve, a MATLAB application that automatically selects resting state networks from fMRI data. It uses independent component analysis to decompose fMRI signals, then applies several processing steps like Pearson's index evaluation, k-means clustering, power spectral analysis to identify and select components corresponding to resting state networks. The selected networks are then used for further analysis.
Further results on the joint time delay and frequency estimation without eige... (IJCNCJournal)
Joint Time Delay and Frequency Estimation (JTDFE) problem of complex sinusoidal signals received at
two separated sensors is an attractive problem that has been considered for several engineering
applications. In this paper, a high resolution null (noise) subspace method without eigenvalue
decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from
Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the
numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method
decreases the computational complexity of JTDFE by approximately half compared with RRQR
methods. The proposed method generates estimates of the unknown parameters which are based on the
observation and/or covariance matrices. This leads to a significant improvement in the computational load.
Computer simulations are included in this paper to demonstrate the proposed method.
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION (ijscai)
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm implements a margin in the classical perceptron algorithm, reducing generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desirable fraction of the margin. This solution is then optimized further to obtain the maximum possible margin. The algorithm can handle linear, non-linear and multi-class problems. Experimental results place the proposed classifier on par with the support vector machine, and even better in some cases. Some preliminary experimental results are briefly discussed.
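The margin-augmented update rule can be sketched for the linearly separable case; the toy data, margin value, and learning rate are illustrative assumptions, and the kernelized non-linear and multi-class extensions are not shown:

```python
import numpy as np

def margin_perceptron(X, y, margin=0.5, lr=0.1, epochs=200):
    """Perceptron with the standard update rule, except that an update is also
    made when a point is classified correctly but with functional margin
    below `margin`."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.c_[X, np.ones(len(X))]               # absorb the bias term
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= margin:          # mistake OR margin violation
                w += lr * yi * xi
    return w

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = margin_perceptron(X, y)
margins = y * (np.c_[X, np.ones(4)] @ w)
print(margins)          # all positive and above the requested margin
```

Replacing the mistake condition `y(w.x) <= 0` with `y(w.x) <= margin` is the entire change from the classical perceptron; updates continue until every point clears the requested functional margin.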
A Novel Algorithm to Estimate Closely Spaced Source DOA (IJECEIAES)
In order to improve the resolution and direction of arrival (DOA) estimation of two closely spaced sources in the context of array processing, a new algorithm is presented. The proposed algorithm combines a spatial sampling technique, to widen the resolution, with a high resolution method, the Multiple Signal Classification (MUSIC) algorithm, to estimate the DOA of two closely spaced sources impinging on the far field of a Uniform Linear Array (ULA). Simulation examples are discussed to demonstrate the performance and the effectiveness of the proposed approach (referred to as Spatial Sampling MUSIC, SS-MUSIC) compared to the classical MUSIC method when used alone in this context.
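The plain MUSIC half of that comparison can be sketched for a half-wavelength ULA; the array size, source angles, SNR, and snapshot count are illustrative assumptions, and the paper's spatial sampling stage is not shown:

```python
import numpy as np

def music_spectrum(X, n_sources, angles):
    """MUSIC pseudospectrum for a half-wavelength uniform linear array."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    _, evecs = np.linalg.eigh(R)                    # ascending eigenvalues
    En = evecs[:, : M - n_sources]                  # noise subspace
    m = np.arange(M)
    p = np.empty(len(angles))
    for i, th in enumerate(angles):
        a = np.exp(1j * np.pi * m * np.sin(th))     # ULA steering vector
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

rng = np.random.default_rng(0)
M, N = 8, 400
thetas = np.deg2rad([10.0, 14.0])                   # two closely spaced sources
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(thetas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
grid = np.deg2rad(np.linspace(0.0, 25.0, 501))
p = music_spectrum(X, 2, grid)
peaks = [i for i in range(1, len(grid) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: p[i])[-2:])
print(np.rad2deg(grid[top2]))                       # near the true 10 and 14 degrees
```

As the angular separation shrinks or the SNR drops, the two pseudospectrum peaks merge, which is the failure mode the spatial sampling stage is designed to push back.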
HIGHLY ACCURATE LOG SKEW NORMAL APPROXIMATION TO THE SUM OF CORRELATED LOGNOR... (cscpconf)
Several methods have been proposed to approximate the sum of correlated lognormal RVs.
However the accuracy of each method relies highly on the region of the resulting distribution
being examined, and the individual lognormal parameters, i.e., mean and variance. There is no
such method which can provide the needed accuracy for all cases. This paper proposes a
universal yet very simple approximation method for the sum of correlated lognormals based on
log skew normal approximation. The main contribution of this work is to propose an analytical
method for log skew normal parameters estimation. The proposed method provides highly
accurate approximation to the sum of correlated lognormal distributions over the whole range
of dB spreads for any correlation coefficient. Simulation results show that our method
outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all
cases.
In order to improve sensing performance when the noise variance is not known, this paper considers a so-called
blind spectrum sensing technique that is based on eigenvalue models. We employ spiked population
models in order to characterize the missed detection probability. First, we estimate the unknown noise variance
based on the blind measurements at a secondary location. We then investigate the performance of detection, in terms
of both theoretical and empirical aspects, after applying this estimated noise variance result. In addition, we study the
effects of the number of SUs and the number of samples on the spectrum sensing performance.
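The spiked-model flavor can be illustrated with a blind eigenvalue-ratio statistic that needs no noise variance; the signal model and sizes below are illustrative, not the paper's estimator:

```python
import numpy as np

def max_eig_statistic(Y):
    """Blind detection statistic: ratio of largest to smallest eigenvalue of
    the sample covariance (no noise variance needed)."""
    R = Y @ Y.T / Y.shape[1]
    ev = np.linalg.eigvalsh(R)
    return ev[-1] / ev[0]

rng = np.random.default_rng(0)
M, N = 4, 2000                                   # SUs (antennas) and samples
noise = rng.standard_normal((M, N))              # noise-only hypothesis
s = rng.standard_normal(N)
signal = 0.5 * np.outer(np.ones(M), s) + noise   # a rank-one spiked component present
print(max_eig_statistic(noise), max_eig_statistic(signal))
```

Under the noise-only hypothesis the ratio stays close to 1 (its spread is what the spiked population model characterizes as M and N vary), while a primary-user signal inflates the largest eigenvalue and pushes the ratio up.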
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
Quantum clustering (QC), is a data clustering algorithm based on quantum mechanics which is
accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a
σ value, a hyper-parameter which can be manually defined and manipulated to suit the application.
Numerical methods are used to find all the minima of the quantum potential as they correspond to cluster
centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the
exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an
outstanding task because normally such expressions are impossible to solve analytically. However, we
prove that if the points are all included in a square region of size σ, there is only one minimum. This bound
is not only useful in the number of solutions to look for, by numerical means, it allows to to propose a new
numerical approach “per block”. This technique decreases the number of particles by approximating some
groups of particles to weighted particles. These findings are not only useful to the quantum clustering
problem but also for the exponential polynomials encountered in quantum chemistry, Solid-state Physics
and other applications.
New Data Association Technique for Target Tracking in Dense Clutter Environme...CSCJournals
Improving data association process by increasing the probability of detecting valid data points (measurements obtained from radar/sonar system) in the presence of noise for target tracking are discussed in manuscript. We develop a novel algorithm by filtering gate for target tracking in dense clutter environment. This algorithm is less sensitive to false alarm (clutter) in gate size than conventional approaches as probabilistic data association filter (PDAF) which has data association algorithm that begin to fail due to the increase in the false alarm rate or low probability of target detection. This new selection filtered gate method combines a conventional threshold based algorithm with geometric metric measure based on one type of the filtering methods that depends on the idea of adaptive clutter suppression methods. An adaptive search based on the distance threshold measure is then used to detect valid filtered data point for target tracking. Simulation results demonstrate the effectiveness and better performance when compared to conventional algorithm.
The document discusses calculating and plotting the allocation of entropy for bipartite and tripartite quantum systems. It provides tables of entropy calculations for a bipartite system of |0> and |1> states, and checks that the results satisfy the subadditivity inequality. It also outlines the methodology to perform similar calculations and plots for other systems to visualize the convex cones of entropy allocation.
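The subadditivity check described above can be sketched numerically. Below is a minimal sketch for a two-qubit example; the helper names and the specific mixed state are illustrative, not taken from the document.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of the
    density matrix (0 log 0 taken as 0)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def reduced_state(rho_ab, keep):
    """Partial trace of a two-qubit density matrix: keep=0 -> subsystem A,
    keep=1 -> subsystem B."""
    r = rho_ab.reshape(2, 2, 2, 2)       # indices (a, b, a', b')
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)

# Equal classical mixture of |00> and |11>
rho_ab = np.zeros((4, 4))
rho_ab[0, 0] = rho_ab[3, 3] = 0.5
S_ab = von_neumann_entropy(rho_ab)
S_a = von_neumann_entropy(reduced_state(rho_ab, 0))
S_b = von_neumann_entropy(reduced_state(rho_ab, 1))
# subadditivity: S(AB) <= S(A) + S(B)
```

For this mixture all three entropies equal 1 bit, so the subadditivity inequality holds with slack.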
Receding Horizon Stochastic Control Algorithms for Sensor Management ACC 2010Darin Hitchings, Ph.D.
This document summarizes a research paper about developing receding horizon control algorithms for sensor management problems. The algorithms aim to efficiently control information acquisition from smart sensors that can dynamically change their observations. The proposed approaches use a stochastic control approximation and receding horizon optimization to obtain feasible sensor schedules from mixed strategies. Simulations showed the receding horizon algorithms achieved near-optimal performance comparable to a theoretical lower bound, and that a modest horizon was sufficient.
This document presents a new method called the Real-valued Iterative Adaptive Approach (RIAA) for estimating the power spectral density of nonuniformly sampled data. It aims to improve upon the periodogram, which suffers from poor resolution and leakage. RIAA is an iteratively weighted least squares periodogram that uses an adaptive weighting matrix built from the most recent spectral estimate. It is shown to have significantly less leakage than the least squares periodogram through its use of an adaptive filter. The Bayesian Information Criterion is also discussed as a way to test the significance of peaks in the estimated spectrum.
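The least squares periodogram that RIAA iteratively reweights can be sketched as follows. This is only the non-iterative baseline (the adaptive weighting matrix of RIAA is omitted), and the signal parameters are illustrative.

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least squares periodogram for nonuniformly sampled real data: at each
    trial frequency, fit a*cos + b*sin by least squares and report the power
    of the fitted sinusoid. (RIAA iteratively reweights this fit with an
    adaptive weighting matrix; that iteration is omitted here.)"""
    y = y - np.mean(y)
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        power[k] = 0.5 * (a * a + b * b)
    return power

# A 1.5 Hz tone sampled at 200 random times in [0, 10] s
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))
y = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(200)
freqs = np.linspace(0.1, 5.0, 491)
power = ls_periodogram(t, y, freqs)      # peak should fall near 1.5 Hz
```

Random (nonuniform) sampling suppresses the aliases a uniform grid would produce, which is why the least squares fit can be evaluated on an arbitrary frequency grid.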
Quantum information as the information of infinite series Vasil Penchev
Quantum information is equivalent to the generalization of classical information from finite to infinite series or collections.
The quantity of information is the quantity of choices measured in units of elementary choice.
The qubit is the generalization of the bit: a choice among a continuum of alternatives.
The axiom of choice is necessary for quantum information: the coherent state is transformed into a well-ordered series of results in time after measurement.
The quantity of quantum information is the ordinal corresponding to the infinite series in question.
Trust Region Algorithm - Bachelor DissertationChristian Adom
The document summarizes the trust region algorithm for solving unconstrained optimization problems. It begins by introducing trust region methods and comparing them to line search algorithms. The basic trust region algorithm is then outlined, which approximates the objective function within a region using a quadratic model at each iteration. It discusses solving the trust region subproblem to find a step that minimizes the model within the trust region. Finally, it introduces the Cauchy point and double dogleg step as methods for solving the subproblem.
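The Cauchy point mentioned above admits a closed form; a minimal sketch under the usual quadratic-model notation (gradient g, Hessian approximation B, trust-region radius delta), with illustrative test values:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Cauchy point of the trust-region subproblem
        min_p  g.T p + 0.5 p.T B p   s.t. ||p|| <= delta:
    the minimizer of the quadratic model along the steepest-descent
    direction, clipped to the trust-region boundary."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0:
        tau = 1.0                        # model decreases out to the boundary
    else:
        tau = min(gnorm ** 3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

# With B = I and delta = 2, the unconstrained step -g lies inside the region
g = np.array([1.0, 0.0])
p = cauchy_point(g, np.eye(2), delta=2.0)
```

When curvature along -g is nonpositive, the step runs to the boundary; otherwise it stops at the model's minimizer along that ray, which is what the dogleg variants then improve on.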
Quantum information as the information of infinite seriesVasil Penchev
The quantum information introduced by quantum mechanics is equivalent to the generalization of classical information from finite to infinite series or collections. The quantity of information is the quantity of choices measured in units of elementary choice. The qubit can be interpreted as the generalization of the bit: a choice among a continuum of alternatives. The axiom of choice is necessary for quantum information. The coherent state is transformed into a well-ordered series of results in time after measurement. The quantity of quantum information is the ordinal corresponding to the infinite series in question.
This document summarizes a research paper that proposes a new method to accelerate the nearest neighbor search step of the k-means clustering algorithm. The k-means algorithm is computationally expensive due to calculating distances between data points and cluster centers. The proposed method uses geometric relationships between data points and centers to reject centers that are unlikely to be the nearest neighbor, without decreasing clustering accuracy. Experimental results showed the method significantly reduced the number of distance computations required.
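The rejection idea can be sketched with a standard triangle-inequality bound. The summary does not give the paper's exact geometric test, so Elkan's classical bound stands in for it here; the pruning never changes the assignment, only skips distance computations.

```python
import numpy as np

def assign_with_pruning(X, centers):
    """Nearest-center assignment that rejects candidates via the triangle
    inequality: if d(c_best, c_j) >= 2*d(x, c_best), center j cannot be
    strictly closer to x than c_best, so its distance is never computed.
    (Elkan-style bound; the paper's own geometric test may differ.)"""
    cc = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    labels = np.empty(len(X), dtype=int)
    skipped = 0
    for i, x in enumerate(X):
        best, d_best = 0, np.linalg.norm(x - centers[0])
        for j in range(1, len(centers)):
            if cc[best, j] >= 2.0 * d_best:
                skipped += 1             # pruned: no distance computation
                continue
            d = np.linalg.norm(x - centers[j])
            if d < d_best:
                best, d_best = j, d
        labels[i] = best
    return labels, skipped

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
centers = rng.standard_normal((5, 2))
labels, skipped = assign_with_pruning(X, centers)
```

The bound is exact: d(x, c_j) >= d(c_best, c_j) - d(x, c_best) >= d(x, c_best), so a pruned center can never be the strict nearest neighbor.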
International Journal of Engineering and Science Invention (IJESI)inventionjournals
This document discusses multidimensional clustering methods for data mining and their industrial applications. It begins with an introduction to clustering, including definitions and goals. Popular clustering algorithms are described, such as K-means, fuzzy C-means, hierarchical clustering, and mixture of Gaussians. Distance measures and their importance in clustering are covered. The K-means and fuzzy C-means algorithms are explained in detail. Examples are provided to illustrate fuzzy C-means clustering. Finally, applications of clustering algorithms in fields such as marketing, biology, and earth sciences are mentioned.
This document discusses a hybridization of the Magnetic Charge System Search (MCSS) method for efficient data clustering. MCSS is a meta-heuristic algorithm inspired by electromagnetic theory that has shown potential but also has issues with convergence rate and getting stuck in local optima. The authors propose a Hybrid MCSS (HMCSS) that incorporates a local search strategy and differential evolution inspired updating to improve convergence. An experiment on benchmark functions and real clustering problems shows HMCSS provides better results than existing algorithms and enhances MCSS convergence.
Histogram-Based Method for Effective Initialization of the K-Means Clustering...Gingles Caroline
This document proposes a histogram-based method for initializing cluster centers in k-means clustering. The method works by recursively finding the most populated histogram bin for each attribute dimension, using the bin centroid as the coordinate for that dimension. This focuses the cluster centers on dense regions of the data distribution. The method is linear in complexity, deterministic, and order-invariant, making it suitable for large datasets where other initialization methods are impractical or unreliable. Experimental results on UCI datasets show it outperforms the commonly used maximin initialization method.
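A single-seed reading of this histogram scheme can be sketched as follows; the paper's method recurses to produce k centers, and the function name, bin count, and test data here are illustrative.

```python
import numpy as np

def histogram_seed(X, bins=10):
    """Seed one cluster center: for each attribute, take the centroid of the
    most populated histogram bin (a simplified single-center reading of the
    recursive scheme). Deterministic and linear in the data size."""
    seed = np.empty(X.shape[1])
    for d in range(X.shape[1]):
        counts, edges = np.histogram(X[:, d], bins=bins)
        b = int(np.argmax(counts))
        in_bin = (X[:, d] >= edges[b]) & (X[:, d] <= edges[b + 1])
        seed[d] = X[in_bin, d].mean()    # centroid of the densest bin
    return seed

# A dense blob near 5 plus sparse background noise
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(5.0, 0.1, (200, 2)), rng.uniform(0, 10, (50, 2))])
seed = histogram_seed(X)
```

Because each dimension is processed independently with a single histogram pass, the cost is O(nd), and the result does not depend on the order of the data points.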
The document describes using a Kalman filter for road map estimation and extended target tracking. It presents a framework that models extended objects using polynomials based on imagery sensor data. State-space models allow the use of Kalman filters for tracking. The Kalman filter provides optimal, recursive estimates by minimizing estimated error covariance. It predicts the next state and corrects the estimate based on new measurements. Simulation results demonstrate tracking an object using the Kalman filter compared to a conventional method.
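The predict/correct cycle described above can be sketched for a scalar state; the model and noise variances below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=1e-4, H=1.0, R=0.1):
    """One predict/correct cycle of a scalar Kalman filter; the gain K
    minimizes the estimated error variance, as described above."""
    x_pred = F * x                          # predict the next state
    P_pred = F * P * F + Q                  # and its error variance
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # correct with the measurement
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant level of 3.0 through noisy measurements
rng = np.random.default_rng(0)
x, P = 0.0, 1.0
for _ in range(200):
    x, P = kalman_step(x, P, 3.0 + 0.3 * rng.standard_normal())
```

After a few hundred measurements the error variance P settles near its steady-state value and the estimate hovers around the true level.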
ADAPTIVE BLIND MULTIUSER DETECTION UNDER IMPULSIVE NOISE USING PRINCIPAL COMP...csandit
In this paper we consider blind signal detection for an asynchronous code division multiple access (CDMA) system using principal component analysis (PCA) in impulsive noise. The blind multiuser detector requires no training sequences, unlike the conventional multiuser detection receiver. The proposed PCA blind multiuser detector is robust compared with detectors that require knowledge of the signature waveforms and the timing of the user of interest. PCA is a statistical method for reducing the dimension of a data set via spectral decomposition of its covariance matrix, i.e., only first- and second-order statistics are estimated.
PCA makes no assumption about the independence of the data vectors; it searches for the linear combinations with the largest variances and, when several are needed, considers variances in decreasing order of importance. PCA also improves the SNR of signals used for differential side channel analysis. In contrast to other approaches, the linear minimum mean-square-error (MMSE) detector is obtained blindly; the detector does not use any training sequence, as subspace methods do, for multiuser detection. The algorithm need not estimate the subspace rank, which reduces the computational complexity. Simulation results show that the new algorithm offers substantial performance gains over traditional subspace methods.
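The covariance-eigendecomposition step that PCA rests on can be sketched in a few lines; the data shape and variances below are illustrative.

```python
import numpy as np

def pca(X, k):
    """PCA via eigendecomposition of the sample covariance matrix: returns
    the k largest variances and their directions, in decreasing order.
    Only second-order statistics of the data are used."""
    Xc = X - X.mean(axis=0)
    C = (Xc.T @ Xc) / (len(X) - 1)       # sample covariance
    evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
    order = np.argsort(evals)[::-1][:k]
    return evals[order], evecs[:, order]

# Data stretched along the first axis: the leading component recovers it
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) * np.array([5.0, 0.5])
evals, evecs = pca(X, 1)
```

The leading eigenvector aligns (up to sign) with the high-variance axis, which is the sense in which PCA orders linear combinations by decreasing variance.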
Adaptive blind multiuser detection under impulsive noise using principal comp...csandit
The document describes an adaptive blind multiuser detection method for asynchronous code division multiple access (CDMA) systems using principal component analysis (PCA) in impulsive noise environments. PCA is used to extract the principal components from the received signal without requiring training sequences or prior knowledge of channel characteristics. The PCA blind multiuser detector provides robust performance compared to knowledge-based detectors when signature waveforms and timing offsets of users are unknown. Simulation results show the proposed PCA method offers substantial gains over traditional subspace methods for multiuser detection.
This document describes rsnSieve, a MATLAB application that automatically selects resting state networks from fMRI data. It uses independent component analysis to decompose fMRI signals, then applies several processing steps like Pearson's index evaluation, k-means clustering, power spectral analysis to identify and select components corresponding to resting state networks. The selected networks are then used for further analysis.
Further results on the joint time delay and frequency estimation without eige...IJCNCJournal
Joint Time Delay and Frequency Estimation (JTDFE) problem of complex sinusoidal signals received at
two separated sensors is an attractive problem that has been considered for several engineering
applications. In this paper, a high resolution null (noise) subspace method without eigenvalue
decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from
Rank-Revealing LU (RRLU) factorization. RRLU provides accurate information about the rank and the
numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method
roughly halves the computational complexity of JTDFE compared with RRQR methods. It generates
estimates of the unknown parameters from the observation and/or covariance matrices, leading to a
significant reduction in computational load. Computer simulations are included to demonstrate the
proposed method.
MARGINAL PERCEPTRON FOR NON-LINEAR AND MULTI CLASS CLASSIFICATION ijscai
The generalization error of a classifier can be reduced by a larger margin of the separating hyperplane. The proposed classification algorithm introduces a margin into the classical perceptron algorithm, reducing generalization error by maximizing the margin of the separating hyperplane. The algorithm uses the same update rule as the perceptron and converges in a finite number of updates to solutions possessing any desirable fraction of the maximum margin. This solution is then further optimized to obtain the largest possible margin. The algorithm can handle linear, non-linear and multi-class problems. Preliminary experimental results, briefly discussed here, place the proposed classifier on par with the support vector machine, and even ahead of it in some cases.
A Novel Algorithm to Estimate Closely Spaced Source DOA IJECEIAES
To improve resolution and direction of arrival (DOA) estimation of two closely spaced sources in the context of array processing, a new algorithm is presented. The proposed algorithm combines a spatial sampling technique, which widens the resolution, with a high resolution method, Multiple Signal Classification (MUSIC), to estimate the DOA of two closely spaced sources impinging on the far field of a Uniform Linear Array (ULA). Simulation examples demonstrate the performance and effectiveness of the proposed approach (referred to as Spatial Sampling MUSIC, SS-MUSIC) compared to the classical MUSIC method used alone in this context.
HIGHLY ACCURATE LOG SKEW NORMAL APPROXIMATION TO THE SUM OF CORRELATED LOGNOR...cscpconf
Several methods have been proposed to approximate the sum of correlated lognormal RVs.
However, the accuracy of each method depends strongly on the region of the resulting distribution
being examined and on the individual lognormal parameters, i.e., mean and variance. No single
method provides the needed accuracy for all cases. This paper proposes a
universal yet very simple approximation method for the sum of correlated lognormals based on
a log skew normal approximation. The main contribution of this work is an analytical
method for log skew normal parameter estimation. The proposed method provides a highly
accurate approximation to the sum of correlated lognormal distributions over the whole range
of dB spreads for any correlation coefficient. Simulation results show that our method
outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all
cases.
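The paper's analytical log skew normal fit is not reproduced in the summary; for contrast, the classical Fenton-Wilkinson baseline that such methods are measured against can be sketched. It matches the first two moments of the sum with a single lognormal (names and the example parameters are illustrative).

```python
import numpy as np

def fenton_wilkinson(mu, Sigma):
    """Single-lognormal approximation to S = sum_i exp(X_i), X ~ N(mu, Sigma),
    matching the exact first and second moments of the sum
    (classical Fenton-Wilkinson; NOT the paper's log skew normal fit)."""
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    d = np.diag(Sigma)
    m1 = np.sum(np.exp(mu + 0.5 * d))                  # E[S]
    M = mu[:, None] + mu[None, :] + 0.5 * (d[:, None] + d[None, :]) + Sigma
    m2 = np.sum(np.exp(M))                             # E[S^2]
    s2 = np.log(m2 / m1 ** 2)                          # variance of log S
    m = np.log(m1) - 0.5 * s2                          # mean of log S
    return m, s2

# Two unit-variance lognormals with correlation 0.5
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
m, s2 = fenton_wilkinson(mu, Sigma)
```

Because a lognormal has only two free parameters, this fit cannot capture the skew of the sum's distribution in the tails, which is exactly the gap the log skew normal approximation is designed to close.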
In order to improve sensing performance when the noise variance is not known, this paper considers a so-called
blind spectrum sensing technique that is based on eigenvalue models. In this paper, we employ spiked population
models in order to identify the missed-detection probability. First, we estimate the unknown noise variance
based on the blind measurements at a secondary location. We then investigate the performance of detection, in terms
of both theoretical and empirical aspects, after applying this estimated noise variance result. In addition, we study the
effects of the number of SUs and the number of samples on the spectrum sensing performance.
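The flavor of eigenvalue-based blind sensing can be sketched with the generic maximum/minimum eigenvalue statistic. This is a standard blind detector, not the paper's spiked-model noise-variance estimator, and the dimensions and signal model below are illustrative.

```python
import numpy as np

def mme_statistic(Y):
    """Maximum/minimum eigenvalue statistic of the sample covariance of the
    received matrix Y (sensors x samples): under noise only, the eigenvalues
    cluster together and the ratio stays near 1; a primary signal inflates
    the top eigenvalue. No knowledge of the noise variance is needed."""
    R = (Y @ Y.T) / Y.shape[1]
    ev = np.linalg.eigvalsh(R)           # ascending order
    return ev[-1] / ev[0]

rng = np.random.default_rng(0)
N, K = 4, 2000                           # SUs (sensors) and samples
noise = rng.standard_normal((N, K))
s = rng.standard_normal(K)               # primary user's waveform
t_h0 = mme_statistic(noise)              # noise only
t_h1 = mme_statistic(noise + np.outer(np.ones(N), s))  # signal present
```

Because the statistic is a ratio of eigenvalues, an unknown common noise variance cancels out, which is what makes this family of detectors "blind".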
This document presents a novel algorithm for classifying signals (glitches) that arise in gravitational wave channels of the Laser Interferometer Gravitational-Wave Observatory (LIGO). The algorithm uses Kohonen Self Organizing Feature Maps and discrete wavelet transform coefficients to classify glitches based on their morphology and other parameters like signal-to-noise ratio and duration. This low-latency algorithm aims to help the LIGO detector characterization group identify and mitigate noise sources more quickly.
OFFLINE SPIKE DETECTION USING TIME DEPENDENT ENTROPY ijbesjournal
This document describes an offline spike detection method using time-dependent entropy (TDE). TDE tracks changes in the entropy content of overlapping windows of neural data, as spikes are transient events that affect the signal's entropy. The method was tested on synthesized neural data containing embedded spike trains at various signal-to-noise ratios. Results showed TDE enabled detection of spikes with relatively lower false alarm rates compared to other methods. Key parameters for TDE include the window length, sliding lag, and entropy index value, which were optimized for spike detection performance.
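The sliding-window entropy idea can be sketched as follows; the window, lag and bin settings here are illustrative, not the paper's optimized values, and the test signal is synthetic.

```python
import numpy as np

def time_dependent_entropy(x, win=64, lag=16, bins=16):
    """Shannon entropy of the amplitude histogram over overlapping windows.
    A transient spike concentrates the histogram mass into a few bins, so
    the entropy dips in windows that contain a spike. win, lag and bins are
    the tuning parameters mentioned in the summary."""
    H = []
    for start in range(0, len(x) - win + 1, lag):
        counts, _ = np.histogram(x[start:start + win], bins=bins)
        p = counts[counts > 0] / win
        H.append(-np.sum(p * np.log2(p)))
    return np.array(H)

# Low-amplitude noise with one embedded spike
rng = np.random.default_rng(0)
sig = 0.2 * rng.standard_normal(1024)
sig[500:510] += 5.0
H = time_dependent_entropy(sig)   # entropy dips in windows covering the spike
```

Thresholding the entropy trace from below then yields spike locations; the trade-off between window length and sliding lag controls time resolution versus histogram stability.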
Many algorithms have been developed to find sparse representations over redundant dictionaries or
transforms. This paper presents a novel method for compressive sensing (CS)-based image compression
using a sparse basis on the CDF 9/7 wavelet transform. The measurement matrix is applied to the three levels of
wavelet transform coefficients of the input image for compressive sampling. We use three different
measurement matrices: a Gaussian matrix, a Bernoulli matrix and a random orthogonal matrix.
Orthogonal matching pursuit (OMP) and Basis Pursuit (BP) are applied to reconstruct each level of the
wavelet transform separately. Experimental results demonstrate that the proposed method gives better
compressed-image quality than existing methods in terms of the proposed image quality evaluation indexes
and other objective measurements (PSNR/UIQI/SSIM).
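The OMP reconstruction step can be sketched generically; Gaussian measurements of a synthetic sparse vector stand in here for the paper's wavelet-domain setup, and all dimensions are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the dictionary column most
    correlated with the current residual, then re-fit all selected columns
    by least squares; stops after k atoms."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Noiseless 3-sparse recovery from 80 random measurements of a length-120 signal
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120)) / np.sqrt(80)
x_true = np.zeros(120)
x_true[[10, 50, 90]] = [2.0, -2.0, 2.0]
x_hat = omp(A, A @ x_true, 3)
```

The re-fit over the whole support (rather than just the newest atom) is what distinguishes OMP from plain matching pursuit, and it makes the residual orthogonal to every selected column.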
This document summarizes a research paper on a multi-user MIMO cognitive radio system that allows for simultaneous spectrum sensing and data transmission. The secondary receiver performs MMSE detection to decode signals from multiple secondary transmitters while also sensing the spectrum to detect potential primary activity. The analysis presents novel expressions for important metrics like detection probability, false alarm probability, and secondary transmission power under assumptions of Rayleigh fading, time-varying channels, and channel estimation errors. Numerical results verify the accuracy of the analysis.
The thesis aimed to advance knowledge in decentralized detection in wireless sensor networks. It developed efficient algorithms for designing decision rules at sensors to minimize error probability. It proved conditions where balanced rate allocation is optimal and applied this to sensor network models. It also formulated decentralized detection problems for energy harvesting sensor networks, developing analytical bounds and numerical design methods. The work provided computational and theoretical advances for optimal inference in distributed sensing applications.
A linear-Discriminant-Analysis-Based Approach to Enhance the Performance of F...CSCJournals
Spike sorting is of prime importance in neurophysiology and hence has received considerable attention. However, conventional methods suffer from the degradation of clustering results in the presence of high levels of noise contamination. This paper presents a scheme for taking advantage of automatic clustering and enhancing the feature extraction efficiency, especially for low-SNR spike data. The method employs linear discriminant analysis based on a fuzzy c-means (FCM) algorithm. Simulated spike data [1] were used as the test bed due to better a priori knowledge of the spike signals. Application to both high and low signal-to-noise ratio (SNR) data showed that the proposed method outperforms conventional principal-component analysis (PCA) and FCM algorithm. FCM failed to cluster spikes for low-SNR data. For two discriminative performance indices based on Fisher's discriminant criterion, the proposed approach was over 1.36 times the ratio of between- and within-class variation of PCA for spike data with SNR ranging from 1.5 to 4.5 dB. In conclusion, the proposed scheme is unsupervised and can enhance the performance of fuzzy c-means clustering in spike sorting with low-SNR data.
Multi Qubit Transmission in Quantum Channels Using Fibre Optics Synchronously...researchinventy
A quantum channel can be used to transmit classical information as well as to deliver quantum data from one location to another. Classical information theory is a subset of quantum information theory, which is fundamentally richer because quantum mechanics includes many more elementary classes of static and dynamic resources. Quantum information theory also covers quantum data processing, manipulation and quantum data compression. Here we consider the quantum channel as a Bosonic channel, a quantum-mechanical model for free-space or fibre-optic communication. This paper gives an overview of the theoretical scenario of quantum networks, in particular multiple-user access to the quantum communication channel. When multiple qubits are generated in different systems, proper alignment of the qubits is a must; it can be on a first-come-first-served or round-robin basis. The received data are grouped into codewords of n qubits each, and quantum error correction is performed. These codewords, agreed between the transmitter and the receiver before transmission over the quantum channel, are known as valid codewords.
A Novel Algorithm for Cooperative Spectrum Sensing in Cognitive Radio NetworksIJAEMSJORNAL
The rapid growth in wireless communication technology has led to a scarcity of spectrum, yet studies show that licensed spectrum is underutilized. Cognitive Radio Networks (CRNs) are a promising solution to this problem, allowing unlicensed users to access unused spectrum opportunistically. In this paper we propose a novel spectrum sensing algorithm to improve the probabilities of detection and false alarm in a CRN, using the traditional techniques of energy detection and first-order correlation detection. Results show a significant improvement in cooperative spectrum sensing performance.
A PHYSICAL THEORY OF INFORMATION VS. A MATHEMATICAL THEORY OF COMMUNICATIONijistjournal
This article introduces a general notion of physical bit information that is compatible with the basics of quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical information leads to the Binary data matrix model (BDM), which predicts the basic results of quantum mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic principle, information conservation, and Landauer's principle is investigated. After deriving the "Bit Information principle" as a consequence of the BDM, the fundamental equations of Planck, De Broglie, Bekenstein, and mass-energy equivalence are derived.
Identification of the Memory Process in the Irregularly Sampled Discrete Time...idescitation
This poster paper analyzes the memory process in the irregularly sampled daily solar radio flux signal between 1972-2013. The authors apply Savitzky-Golay filtering to denoise the signal, then use Finite Variance Scaling Method and Hurst exponent analysis to investigate the memory pattern. Their analysis finds the signal exhibits short memory behavior, suggesting it may have multi-periodic or pseudo-periodic characteristics. This provides insight into the internal dynamics and particle acceleration processes of the Sun.
Framework for Channel Attenuation Model Final PaperKhade Grant
This document describes a framework for validating wireless channel attenuation models for body sensor networks. The framework uses MATLAB to develop validation software that compares output from test attenuation models to reference signals using various signal analysis methods. The software was tested on sample data and returned validity values consistent with expectations, demonstrating it can effectively evaluate how well models predict channel attenuation effects. The validation framework will enable evaluation of future attenuation models to incorporate realistic attenuation simulations into body sensor network modeling.
DCE: A NOVEL DELAY CORRELATION MEASUREMENT FOR TOMOGRAPHY WITH PASSIVE REAL...ijdpsjournal
Tomography is important for network design and routing optimization. Prior approaches require either
precise time synchronization or complex cooperation. Furthermore, active tomography consumes explicit
probing resulting in limited scalability. To address the first issue we propose a novel Delay Correlation
Estimation methodology named DCE with no need of synchronization and special cooperation. For the
second issue we develop a passive realization mechanism merely using regular data flow without explicit
bandwidth consumption. Extensive simulations in OMNeT++ are carried out to evaluate its accuracy, showing
that the DCE measurement closely matches the true value. The test results also show that the passive
realization mechanism achieves both regular data transmission and the purpose of tomography, with
excellent robustness across different background traffic levels and packet sizes.
This summarizes a document about a filter-and-refine approach for reducing computational cost when performing correlation analysis on pairs of spatial time series datasets. It groups similar time series within each dataset into "cones" based on spatial autocorrelation. Cone-level correlation computation can then filter out many element pairs whose correlation is clearly below a threshold. The remaining pairs require individual correlation computation in the refinement phase. Experiments on Earth science datasets showed significant computational savings, especially with high correlation thresholds.
A Non Parametric Estimation Based Underwater Target ClassifierCSCJournals
Underwater noise sources constitute a prominent class of input signal in most underwater signal processing systems. The problem of identification of noise sources in the ocean is of great importance because of its numerous practical applications. In this paper, a methodology is presented for the detection and identification of underwater targets and noise sources based on non parametric indicators. The proposed system utilizes Cepstral coefficient analysis and the Kruskal-Wallis H statistic along with other statistical indicators like F-test statistic for the effective detection and classification of noise sources in the ocean. Simulation results for typical underwater noise data and the set of identified underwater targets are also presented in this paper.
This document summarizes a student project on blind source separation of audio signals. The student was able to recover three independent audio source signals from three mixtures with over 99.97% accuracy. Blind source separation is the separation of source signals from mixed signals without information about the sources or mixing process. It has applications in areas like speech processing. The student acknowledges help from advisors and friends. They provide background on blind source separation and describe their methodology, results, and references.
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...IOSRJVSP
This document presents a neural network approach to channel equalization using a multilayer perceptron with a variable learning rate parameter. Specifically, it proposes modifying the backpropagation algorithm to allow the learning rate to adapt at each iteration in order to achieve faster convergence. The equalizer structure is a decision feedback equalizer modeled as a neural network with an input, hidden and output layer. Simulation results show the proposed variable learning rate approach improves bit error rate and convergence speed compared to a standard backpropagation algorithm.
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Channel
Muhammad Fahad, Member, IEEE
Department of Applied Computing
Michigan Technological University
Houghton, MI 49931, USA
mfahad@mtu.edu
Daniel R. Fuhrmann, Fellow, IEEE
Department of Applied Computing
Michigan Technological University
Houghton, MI 49931, USA
fuhrmann@mtu.edu
Abstract
We consider an experimental design problem in which there are two Poisson sources with two
possible and known rates, and one counter. Through a switch, the counter can observe
the sources individually or the counts can be combined so that the counter observes the
sum of the two. The sensor scheduling problem is to determine an optimal proportion of
the available time to be allocated toward individual and joint sensing, under a total time
constraint. Two different metrics are used for optimization: mutual information between
the sources and the observed counts, and probability of detection for the associated
source detection problem. Our results, which are primarily computational, indicate
similar but not identical results under the two cost functions.
Index Terms
sensor scheduling, vector Poisson channels.
1. INTRODUCTION
The importance of sensor scheduling emerges when it is not possible to collect unlimited amounts of data due to resource constraints [1]. Generally speaking, it is always advantageous to collect as much data as possible, provided such data can be properly managed and processed. However, there are constraints imposed by available time, computational resources, memory requirements, communication bandwidth, available battery power, and sensor deployment patterns. This creates a need for an optimal or heuristic scheduling methodology to extract meaningful data from the available data sources in an efficient manner.
One of the major challenges in sensor scheduling problems involving switching among various sources arises from the combinatorial nature of the possible switching sequences. The problem becomes further complicated if the sensor scheduling is done in continuous-time settings [2].
Information-theoretic approaches have long been used in sensor scheduling problems,
especially for their invariance to invertible transformations of the data. Many of the early
approaches towards sensor management were information-theoretic, for instance [3], [4].
Mutual information between observed data and unknown variables under test provides
an intuitively appealing scalar metric for the performance of a given data acquisition
methodology. That being said, it is also well-known that maximizing mutual information
does not necessarily lead to optimal detection or estimation performance.
In this paper we address a very basic problem of scheduling a single sensor with multiple
switch configurations to detect a binary 2-long random vector in a vector Poisson channel.
Signal & Image Processing: An International Journal (SIPIJ) Vol.12, No.6, December 2021
DOI: 10.5121/sipij.2021.12601
Fig. 1: Illustration of the sensing method for Bayesian detection of a 2-long hidden random vector X from a 3-long observable random vector Y in a vector Poisson channel under a total observation time constraint, where {Pi} is a conditional Poisson point process given the Bernoulli-distributed input Xi.
This might be of special interest in the context of Poisson compressive sensing, where sparse signal conditions can be exploited for reduced computational cost and efficient sensor scheduling. We address this particular sensor scheduling problem from both information-theoretic and detection-theoretic perspectives. As will be seen, this problem is deceptively simple yet does not readily admit closed-form solutions.
The Poisson channel has traditionally been difficult to work with due to two inherent obstacles. First, the added Poisson noise and the scaling of the input do not consolidate into a single parameter the way SNR does in the Gaussian channel. Second, scaling the support of a Poisson random variable (the integers) does not result in another Poisson random variable. This is in contrast to Gaussian channels, which are relatively well studied from both information-theoretic and estimation-theoretic aspects [5], [6], [7]. The Poisson channel has the advantage, in thought experiments such as ours, of simple conceptual models for data acquisition based on counting (photons, say) and switches that allow counts either to pass through or to be discarded.
As will be seen when our problem is stated in more detail, fundamentally we are
exploring a trade-off between SNR and ambiguity that arises when phenomena from
multiple sources are combined in a single observation. Adding multiple Poisson processes
results in a Poisson process at a higher rate; higher rates correspond to higher SNR, and
generally higher SNR is seen as a good thing for detection and estimation performance.
However, this SNR comes at a cost: with a single detector looking at counts from multiple
processes, the observed Poisson counts cannot be disambiguated back to their original
sources. In our proposed scheme, part of the time is spent observing the sources individually (to help with this disambiguation) and part of the time is spent observing the sources jointly (to increase SNR). Finding the right balance between these two approaches is the heart of our problem.

Fig. 2: An abstract example: Y1, Y2 and Y3 are the total counts of photons accumulated in times T1, T2 and T3, respectively. LED source 1 (and source 2) initiates a homogeneous Poisson process of intensity λ0 if the Bernoulli-distributed random input signal X1 (and X2) takes the value 0, with probability (1 − p), and of intensity λ1 if the input signal takes the value 1, with probability p. Once the input random vector [X1 X2] assumes any of its four possible states (00, 01, 10, 11), it does not change state during the course of photon counting for the given time T.
In an unconstrained sensing-time setting, the vector Poisson channel has attracted much research activity, with interesting results paralleling those of the vector Gaussian channel. One of the seminal works linking information theory and estimation theory in the context of the scalar Poisson channel is by Guo et al. [8], where the derivative of mutual information with respect to signal intensity is equated with a function of a conditional expectation, providing grounds for a possible Poisson counterpart of the Gaussian channel. This result for the scalar Poisson channel was further refined by Atar and Weissman in [9], where an exact relationship between the derivative of mutual information and the minimum mean loss error is given via the loss function l(x, x̂) = x log(x/x̂) − x + x̂, which shares a key property with the squared-error loss, i.e. the optimality of the conditional expectation with respect to the mean loss criterion. This result, together with

d/dα I(X; P(αX)) |α=0 = E[X · log X] − E[X] · log(E[X])

given in [8], provides an exact answer to the question: for a given finite sensing time T and prior p, which of the two sensing mechanisms, individual or joint sensing, is better? (Hybrid sensing, however, remains elusive, and that is what we investigate in this paper.)
Wang et al. [10] unify the vector Poisson and Gaussian channels by constructing a generalization of the classical Bregman divergence, extending the scalar result to the (unconstrained) vector case. They provide the gradient of mutual information with respect to the input scaling matrices for both the Poisson and Gaussian channels. However, for a general vector Poisson channel, the gradient of mutual information w.r.t. the scaling matrix is defined in terms of the expected value of a Bregman divergence matrix with a strictly K-convex loss function, which requires a partial-ordering interpretation [11], [10]
Fig. 3: Mutual information I(X; Y) and Bayesian probability of total correct detections Pd vs. time T3, where 0 ≤ T3 ≤ T and T1 = T2 = (T − T3)/2 so that the time constraint T = T1 + T2 + T3 is satisfied.
and which does not cover our problem. We are therefore also interested in exploring the general concave nature of our problem w.r.t. the scaling matrix (if it exists), which is discussed in the coming sections.
The rest of the paper is organized as follows: Section 2 introduces the problem description, explaining the vector Poisson channel in which the target is to be observed. Section 3 provides the information-theoretic model, while Section 4 describes the detection-theoretic model of the problem. Section 5 discusses the computed results from the previous two sections. Finally, Section 6 concludes the paper.

Notation: Upper-case letters denote random vectors. Realizations of the random vectors are denoted by lower-case letters. A number in the subscript indicates the component of the random vector. We use X1 and Y1 to represent scalar input and output random variables, respectively. The superscript (·)^T denotes the matrix/vector transpose. The deterministic time variable Ti denotes the sensing time for the i-th conditional Poisson process, and T is a given finite time. α is an arbitrary positive scalar variable. Φ represents the scaling matrix. p is the prior probability of 1 for a Bernoulli random variable. fX(x) denotes the probability mass function of X, and the subscript will normally be dropped to simplify notation. Poiss(y; λ) ≡ e^(−λ) λ^y / y!.
2. PROBLEM DESCRIPTION
Let X1 and X2 be two independent and identically distributed (i.i.d.) Bernoulli random variables with p the probability of occurrence of a 1. We may define the probability mass function f of the discrete random vector X ≡ [X1, X2]^T as

fX(x) = { p^2,        x = [1 1]^T
        { (1 − p)^2,  x = [0 0]^T          (1)
        { p(1 − p),   x = [0 1]^T or x = [1 0]^T
Let {P1(t), t ≥ 0} and {P2(t), t ≥ 0} be two conditional Poisson point processes [12, pp. 88-89] such that their rate parameters, either λ0 or λ1, are determined by X1 and X2,
respectively. That is, λ(xi) = λ0 if xi = 0 and λ(xi) = λ1 if xi = 1, for i = 1, 2; λ0 and λ1 are assumed known. We may count the number of arrivals from the two conditional Poisson arrival processes in three possible configurations: two by individually counting the arrivals from the two processes P1, P2, and one by counting the arrivals from the sum of the two processes {P1(t) + P2(t), t ≥ 0}, with rate parameter λ(X1) + λ(X2), as illustrated in Figs. (1) and (2). The counting is performed such that at any given time only one of the three possible configurations is active. Furthermore, because of the independent-increments property of conditional Poisson processes, it is not necessary to switch back and forth among configurations; it is sufficient to be in configuration 1 for time T1, followed by configuration 2 for time T2, and then configuration 3 for time T3. Additionally, counting is performed under a finite time constraint T = T1 + T2 + T3, where T1, T2 and T3 are the unknown time proportions for counting arrivals from the processes {P1(t), t ≥ 0}, {P2(t), t ≥ 0} and {P1(t) + P2(t), t ≥ 0}, respectively, and T is the total time. After utilizing the available time T, the above counting paradigm leads to a multivariate Poisson mixture model with four components in three dimensions; we write the random vector Y ≡ [Y1 Y2 Y3]^T, so that Y ∈ {0, 1, 2, 3, ...}^3. Each Yi is a Poisson random variable given X, with conditional law

Y1 | X1 ∼ Poiss( y1; (λ0 · (1 − X1) + λ1 · X1) · T1 )
Y2 | X2 ∼ Poiss( y2; (λ0 · (1 − X2) + λ1 · X2) · T2 )
Y3 | (X1 + X2) ∼ Poiss( y3; ((2 − (X1 + X2)) · λ0 + (X1 + X2) · λ1) · T3 ).      (2)
The observed random vector Y carries information about the hidden random vector X ≡ [X1 X2]^T. We are interested in strategies for selecting the times T1, T2, T3 that optimize some cost related to the inference problem, either mutual information or probability of correct detection. Strategies that set T3 to 0 are referred to as individual sensing; any time spent looking at combined counts is termed joint sensing. The general case, which involves some individual and some joint sensing, will be referred to as hybrid sensing.
In matrix form we may relate the intensities of the conditionally Poisson distributed Yi's as

M = D B Λ = Φ Λ,

where

M = [µ1 µ2 µ3]^T is 3×1,
D = diag(T1, T2, T3) is 3×3,
B = [1 0; 0 1; 1 1] is 3×2,
Λ = [Λ1 Λ2]^T is 2×1,
Φ = D B = [T1 0; 0 T2; T3 T3] is 3×2,

so that

M = [ (λ0 · (1 − X1) + λ1 · X1) · T1
      (λ0 · (1 − X2) + λ1 · X2) · T2
      ((2 − (X1 + X2)) · λ0 + (X1 + X2) · λ1) · T3 ],      (3)
where Λ1 = λ0 · (1 − X1) + λ1 · X1 and Λ2 = λ0 · (1 − X2) + λ1 · X2 are two scalar functions of the random variables X1 and X2, respectively. From the optimization point of view, and in terms of sensor scheduling, we are interested in finding the optimal time allocation (T1, T2, T3) of the total available time resource T that maximizes the reward, i.e. either the mutual information or the probability of total correct detections. We may say that we are
Fig. 4: Scalar Poisson channel: mutual information I(X1; Y1) and probability of total correct detections Pd vs. time T.
interested in finding the diagonal matrix D, such that the mutual information I(X; Y ) is
maximized under time constraint T = T1 + T2 + T3. Configuration matrix B represents
all possible sensing combinations and Λ is the Poisson rate parameter vector. Vector M
is the vector of intensities of the Poisson random variables Y1, Y2, and Y3.
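The factorization M = DBΛ = ΦΛ can be sketched in plain Python (again our own illustration; the names D, B and Phi follow (3)):

```python
def scaling_matrix(T1, T2, T3):
    """Phi = D @ B from Eq. (3): D = diag(T1, T2, T3) holds the sensing
    times, B encodes the three switch configurations."""
    D = [[T1, 0, 0], [0, T2, 0], [0, 0, T3]]
    B = [[1, 0], [0, 1], [1, 1]]
    # plain-Python product of a 3x3 and a 3x2 matrix
    return [[sum(D[i][k] * B[k][j] for k in range(3)) for j in range(2)]
            for i in range(3)]

def mean_vector(T1, T2, T3, Lam1, Lam2):
    """M = Phi @ Lam: the intensities (mu1, mu2, mu3) of Y1, Y2, Y3."""
    Phi = scaling_matrix(T1, T2, T3)
    return [row[0] * Lam1 + row[1] * Lam2 for row in Phi]
```

With (T1, T2, T3) = (0.25, 0.25, 0.5) and (Λ1, Λ2) = (2, 4), this reproduces the intensities (0.5, 1.0, 3.0) obtained from (2) for x = [0 1].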
Mathematically, we may write

max_{T1,T2,T3} I(X1, X2; Y1, Y2, Y3)  s.t.  T1 + T2 + T3 = T.      (4)

We may rewrite the objective function as

max_{T1,T2,T3} I(X1, X2; Y1, Y2, Y3)  s.t.  T1 + T2 + T3 = 1,      (5)

where 0 ≤ T1 ≤ 1, 0 ≤ T2 ≤ 1 and 0 ≤ T3 ≤ 1. Without loss of generality we can take T = 1, since the means of the observed Yi variables are products of rate and time, so any change in the total available time can be absorbed into the rates, or equivalently into a change of time units.

We extend our understanding by looking at the same problem from the detection-theoretic aspect. For this, we maximize the Bayesian probability of total correct detection, Pd, of the hidden random vector X from the observable random vector Y, as

max_{T1,T2,T3} Pd  s.t.  T1 + T2 + T3 = 1      (6)

and compare the results to the previously computed information-theoretic results. Simulations on empirical data are performed by varying the different parameters involved in the model for further validation of the computed results.
3. INFORMATION THEORETIC DESCRIPTION
A. Scalar Poisson channel
First the scalar version of the Poisson channel is presented, and then it is extended to the vector version of our problem. We start with the mutual information between the scalar
Bernoulli random variable X1 and Y1, which is a scalar, two-component Poisson mixture. The probability mass function of Y1 is given as

f(y1) = (1 − p) · Poiss(y1; Tλ0) + p · Poiss(y1; Tλ1).

The mutual information I can be written as

I(X1; Y1) = H(Y1) − H(Y1|X1),

where H(·) is the Shannon entropy, defined as the discrete functional H : f → −Σ_{y∈Y} f(y) · log2 f(y), in which f is the probability mass function of the random variate Y and Y is the corresponding support. We may write H(Y1) as

H(Y1) = −Σ_{y1=0}^∞ [(1 − p) · Poiss(y1; Tλ0) + p · Poiss(y1; Tλ1)] · log2[(1 − p) · Poiss(y1; Tλ0) + p · Poiss(y1; Tλ1)],

and

H(Y1|X1) = Σ_{x1∈X1} f(x1) Σ_{y1=0}^∞ −f(y1|x1) · log2 f(y1|x1)
         = −[ (1 − p) Σ_{y1=0}^∞ Poiss(y1; Tλ0) · log2 Poiss(y1; Tλ0) + p Σ_{y1=0}^∞ Poiss(y1; Tλ1) · log2 Poiss(y1; Tλ1) ].
In Fig. (4), for the scalar Poisson channel discussed above, I(X1; Y1) and Pd exhibit a monotonic relationship w.r.t. T, as expected.
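These sums are straightforward to evaluate numerically. A minimal sketch (ours, not the paper's code; the fixed truncation limit ymax is a simplification of the paper's Appendix C criterion):

```python
import math

def log2_poiss(y, lam):
    """log2 Poiss(y; lam), computed in log space to avoid overflow."""
    if lam == 0:
        return 0.0 if y == 0 else float("-inf")
    return (y * math.log(lam) - lam - math.lgamma(y + 1)) / math.log(2)

def mi_scalar(p, lam0, lam1, T, ymax=400):
    """I(X1; Y1) = H(Y1) - H(Y1|X1) for the two-component Poisson mixture."""
    h_y, h_y_given_x = 0.0, 0.0
    for y in range(ymax + 1):
        l0, l1 = log2_poiss(y, T * lam0), log2_poiss(y, T * lam1)
        f0, f1 = 2.0 ** l0, 2.0 ** l1
        mix = (1 - p) * f0 + p * f1           # marginal pmf f(y)
        if mix > 0:
            h_y -= mix * math.log2(mix)       # entropy of the mixture
        if f0 > 0:
            h_y_given_x -= (1 - p) * f0 * l0  # H(Y1|X1=0) contribution
        if f1 > 0:
            h_y_given_x -= p * f1 * l1        # H(Y1|X1=1) contribution
    return h_y - h_y_given_x
```

As a sanity check, I is zero when λ0 = λ1 (the counts then carry no information about X1) and increases with T otherwise, consistent with Fig. (4).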
B. Vector Poisson channel
The mutual information between two random vectors can be written as the entropy of one random vector minus the conditional entropy of that same vector given the other. We write

I(X; Y) = H(Y) − H(Y|X).      (7)

The conditional entropy H(Y|X) is calculated from the conditional probability mass functions f(y | X = [x1 x2]^T), defined as

f(y | X = [0 0]^T) = Poiss(y1; λ0T1) · Poiss(y2; λ0T2) · Poiss(y3; 2λ0T3),
f(y | X = [0 1]^T) = Poiss(y1; λ0T1) · Poiss(y2; λ1T2) · Poiss(y3; (λ0 + λ1)T3),
f(y | X = [1 0]^T) = Poiss(y1; λ1T1) · Poiss(y2; λ0T2) · Poiss(y3; (λ1 + λ0)T3),
f(y | X = [1 1]^T) = Poiss(y1; λ1T1) · Poiss(y2; λ1T2) · Poiss(y3; 2λ1T3).
The marginal probability mass function of Y is then given as

f(y) = (1 − p)^2 · f(y | X = [0 0]^T) + p(1 − p) · f(y | X = [0 1]^T) + p(1 − p) · f(y | X = [1 0]^T) + p^2 · f(y | X = [1 1]^T).      (8)
Using the identity in (7) and the definition of entropy given above, the mutual information I(X; Y) becomes

I(X; Y) = Σ_{y1=0}^∞ Σ_{y2=0}^∞ Σ_{y3=0}^∞ −f(y1, y2, y3) · log2 f(y1, y2, y3)
        − Σ_{x∈X} f(x) [ Σ_{y1=0}^∞ Σ_{y2=0}^∞ Σ_{y3=0}^∞ −f(y1, y2, y3 | x) · log2 f(y1, y2, y3 | x) ].      (9)
A complete expression for mutual information is given in Appendix A.
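A direct, brute-force evaluation of (9) can be sketched as follows (our illustration; ymax again stands in for the Appendix C truncation):

```python
import math

def log2_poiss(y, lam):
    """log2 Poiss(y; lam) in log space."""
    if lam == 0:
        return 0.0 if y == 0 else float("-inf")
    return (y * math.log(lam) - lam - math.lgamma(y + 1)) / math.log(2)

def mi_vector(p, lam0, lam1, T1, T2, T3, ymax=40):
    """I(X; Y) = H(Y) - H(Y|X) per Eqs. (7)-(9), truncating each sum at ymax."""
    # four hypotheses: prior and component intensities, as in Eq. (2)
    hyps = [((1 - p) ** 2, (lam0 * T1, lam0 * T2, 2 * lam0 * T3)),
            (p * (1 - p), (lam0 * T1, lam1 * T2, (lam0 + lam1) * T3)),
            (p * (1 - p), (lam1 * T1, lam0 * T2, (lam0 + lam1) * T3)),
            (p ** 2, (lam1 * T1, lam1 * T2, 2 * lam1 * T3))]
    h_y, h_y_x = 0.0, 0.0
    for y1 in range(ymax + 1):
        for y2 in range(ymax + 1):
            for y3 in range(ymax + 1):
                mix = 0.0
                for prior, (m1, m2, m3) in hyps:
                    lf = (log2_poiss(y1, m1) + log2_poiss(y2, m2)
                          + log2_poiss(y3, m3))
                    f = 2.0 ** lf
                    mix += prior * f
                    if f > 0:
                        h_y_x -= prior * f * lf   # conditional entropy term
                if mix > 0:
                    h_y -= mix * math.log2(mix)   # mixture entropy term
    return h_y - h_y_x
```

The triple loop makes this slow for large intensities; it is meant only to make (9) concrete, not to reproduce the paper's flowchart-based implementation.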
Fig. (3) illustrates the concavity of mutual information as a function of T3 (the constrained problem) in the vector Poisson channel defined in (2) and illustrated in Fig. (1), when the three sensing times are varied according to (T1, T2, T3) = ((1 − T3)/2, (1 − T3)/2, T3) with 0 ≤ T3 ≤ 1. It can also be seen that the two metrics Pd and I are maximized at two different input argument values.
Theorem 1: I(X1, X2; Y1, Y2, Y3) is concave in the T3 = 0 plane.

Proof: I(X1, X2; Y1, Y2, Y3) |T3=0 = I(X1, X2; Y1, Y2). By the chain rule of mutual information:

I(X1, X2; Y1, Y2) = I(X1, X2; Y1) + I(X1, X2; Y2 | Y1)
                  = I(X1; Y1) + I(X2; Y2).      (10)

We note in (10) that each I(Xi; Yi) is solely a function of Ti and is concave in it [9, p. 1306], [13]. We further know that a sum of concave functions is concave. Therefore I(X1, X2; Y1, Y2) is concave in T1 and T2 when T3 = 0. This concludes the proof.
Theorem 2: I(X1, X2; Y1, Y2, Y3) is symmetric in the variables T1 and T2.

Proof: The mutual information I(X1, X2; Y1, Y2, Y3) given in Appendix A is invariant under any permutation of the variables T1 and T2; interchanging the two variables leaves the expression unchanged.

If we further expand the expression for the mutual information between X and Y using the chain rule [14], [15], we get
I(X1, X2; Y1, Y2, Y3)
= I(X1, X2; Y1) + I(X1, X2; Y2 | Y1) + I(X1, X2; Y3 | Y1, Y2)
= I(X1; Y1) + I(X2; Y1 | X1) + I(X1; Y2 | Y1) + I(X2; Y2 | Y1, X1) + I(X1, X2; Y3 | Y1, Y2),

where I(X2; Y1 | X1) = 0, I(X1; Y2 | Y1) = 0 and I(X2; Y2 | Y1, X1) = I(X2; Y2), so that

I(X1, X2; Y1, Y2, Y3) = I(X1; Y1) + I(X2; Y2) + I(X1, X2; Y3 | Y1, Y2).      (11)
Fig. 5: Concavity and convexity of the three terms in I(X1, X2; Y1, Y2, Y3) = I(X1; Y1) + I(X2; Y2) + I(X1, X2; Y3 | Y1, Y2) when (T1, T2, T3) := ((1 − T3)/2, (1 − T3)/2, T3) such that 0 ≤ T3 ≤ 1.
Note in (11) that the first and second terms are indeed concave in (T1, T2, T3). The third term, when computed, exhibits non-concavity in the respective domain. Fig. (5) illustrates this when the mutual information is plotted along the line (T1, T2, T3) := ((T − T3)/2, (T − T3)/2, T3) parametrized by 0 ≤ T3 ≤ T. It is interesting to note that the sum of the three terms exhibits concavity, despite the fact that the third term is non-concave. However, an analytical investigation of the functional properties of this third term, I(X1, X2; Y3 | Y1, Y2), is not undertaken in this work.
We computed the mutual information between the hidden random vector X and the observable random vector Y for a diverse set of intensities λ0 and λ1, along with different priors and total available time T = 1, to probe the concavity of I(X1, X2; Y1, Y2, Y3) in T1, T2 and T3. While we do not have a proof that the objective function is concave under a linear time constraint for the I metric, we have carried out extensive computational experiments and have yet to observe a single case of non-concavity. Based on this consistent observation we propose the following conjecture.
Conjecture 1: Let X ≡ [X1 X2]^T be a non-negative random vector where X1 and X2 are mutually independent and identically distributed. Let the random vector Y ≡ [Y1 Y2 Y3]^T ∈ Z+^3 be jointly distributed with X such that the conditional law is given as in (2). Then I(X1, X2; Y1, Y2, Y3) is concave in (T1, T2, T3) under the time constraint T = T1 + T2 + T3.

Corollary 1 (Feasible region for the optimal solution): The two properties of mutual information, concavity (if true) and symmetry (proved), guarantee that a maximum must occur on the line of symmetry (T1, T2, T3) := ((T − α)/2, (T − α)/2, α), where 0 ≤ α ≤ T.
One immediate consequence of exploiting the symmetry and concavity of the problem is that the search space for the optimal solution is reduced from two dimensions to one.

Claim: Mutual information I(X1, X2; Y1, Y2, Y3) is in general not symmetric in p around p = 0.5 for fixed time proportions (T1, T2, T3), for a conditionally multivariate Poisson (Y1, Y2, Y3) given (X1, X2).
Example: For a scalar random variable X1 and conditional Poisson variable Y1 with scaling factor T, we have equation (115) on page 10 of [8]:

d/dT I(X1; Y1) |T=0 = E[Λ1 · log Λ1] − E[Λ1] · log(E[Λ1]),      (12)

where Λ1 = λ0 · (1 − X1) + λ1 · X1. Now consider a random vector X (instead of the scalar random variable above), with the distribution on X given in (1), and divide the total time T equally between counting arrivals from X1 and X2 separately: T1 = T2 = T/2 and T3 = 0. Then

d/dT I(X1, X2; Y1, Y2, Y3) |T=0 = E[Λ1 · log2 Λ1] − E[Λ1] · log2(E[Λ1]),      (14)

viewed as a function of the prior p, and its second derivative with respect to p is

− (λ0 − λ1)^2 / ( ln(2) · ((1 − p) · λ0 + p · λ1) ) < 0.      (15)

The maximum of (14) occurs at p = (4 − e)/e when λ0 = 2 and λ1 = 4. Since (14) is concave in p, if it were symmetric its maximum would have occurred at p = 0.5.
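The stationary point quoted above is easy to check numerically; a small sketch (ours), where g(p) is the right-hand side of (14) viewed as a function of p:

```python
import math

def g(p, lam0, lam1):
    """E[L*log2 L] - E[L]*log2 E[L] for L = lam0 w.p. (1-p), lam1 w.p. p:
    the derivative of mutual information at T = 0, Eq. (14)."""
    m = (1 - p) * lam0 + p * lam1   # E[L]
    return ((1 - p) * lam0 * math.log2(lam0)
            + p * lam1 * math.log2(lam1) - m * math.log2(m))

# grid-search the maximizer for lam0 = 2, lam1 = 4
grid = [i / 100000 for i in range(1, 100000)]
p_star = max(grid, key=lambda p: g(p, 2.0, 4.0))
# closed form from the text: p = (4 - e)/e, approximately 0.4715
```

Since g is strictly concave in p, the grid maximizer lies within one grid step of the analytical value, confirming that the maximum sits off p = 0.5.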
4. DETECTION THEORETIC DESCRIPTION
A. Maximization of probability of total correct detections in sensing time intervals
In the previous section we discussed and presented the metric of mutual information I between the hidden random vector X and the observable vector Y. Here we approach the original sensing problem as a multi-hypothesis detection problem and find the optimal solution by minimizing the Bayesian risk [16, p. 220] over the sensing times (T1, T2, T3).
We define the Bayes risk r as

r = (1 − p)^2 [ P00|00 C00|00 + P01|00 C01|00 + P10|00 C10|00 + P11|00 C11|00 ]
  + p(1 − p) [ P00|01 C00|01 + P01|01 C01|01 + P10|01 C10|01 + P11|01 C11|01 ]
  + p(1 − p) [ P00|10 C00|10 + P01|10 C01|10 + P10|10 C10|10 + P11|10 C11|10 ]
  + p^2 [ P00|11 C00|11 + P01|11 C01|11 + P10|11 C10|11 + P11|11 C11|11 ],
where Pkl|ij is the probability that X = [x1^i x2^j]^T is true while the decision X = [x1^k x2^l]^T is made; similarly for Ckl|ij. Setting all costs for which [x1^i x2^j]^T ≠ [x1^k x2^l]^T to one and those for which [x1^i x2^j]^T = [x1^k x2^l]^T to zero, we have

r = (1 − p)^2 [ P01|00 + P10|00 + P11|00 ]
  + p(1 − p) [ P00|01 + P10|01 + P11|01 ]
  + p(1 − p) [ P00|10 + P01|10 + P11|10 ]
  + p^2 [ P00|11 + P01|11 + P10|11 ].      (16)
We are interested in minimizing this Bayes risk r over (T1, T2, T3), i.e.

min_{T1,T2,T3} r  s.t.  T1 + T2 + T3 = 1.      (17)

Note that while minimizing r over (T1, T2, T3), the decision boundaries change accordingly and become functions of (T1, T2, T3). Equivalently, we may say that

max_{T1,T2,T3} Pd  s.t.  T1 + T2 + T3 = 1,      (18)

where Pd is the probability of total correct detections, Pd = 1 − r. In the next section we compute I and Pd for different channel parameters, as given in Appendices A and B, respectively.
5. COMPUTATIONAL AND SIMULATION RESULTS
The primary purpose of performing the computations and simulations is twofold: first, to see under which circumstances each sensing method (individual, joint or hybrid) is optimal, since analytical solutions for the two objective functions do not exist; and second, to investigate whether the two metrics I and Pd lead to the same or different maximizing input arguments. The objective function defined in (4) is observed to be a convex optimization problem when a pre-defined channel scaling matrix structure is imposed. For all computational purposes, we approximate the Poisson mass function by truncating it; truncation is done with a rectangular window that extends from the lower limit 0 to the upper limit where the Poisson pmf drops below double-precision machine epsilon (ε = 2^−53), as given in Appendix C.
Fig. (6) illustrates how the optimal input argument T3 moves from 0 to 1 as the prior 0 < p < 1 is varied while the input intensities are held fixed. Ninety linearly spaced discrete points in the range (0, 0.5] are first taken for the prior p, and then another ten linearly spaced points are taken in the interval (0.5, 1), constituting a total of one hundred values of p for the computations. For every value of the prior p, another hundred linearly spaced points for the time proportion T3 are taken in the closed interval [0, 1], such that T1 = T2 = (1 − T3)/2 and T1 + T2 + T3 = 1. At each value of p, a hundred values of I are computed, corresponding to the hundred time proportions. The maximum value of I is then sought and the corresponding value of T3 is plotted against the prior p. The plot signifies that if the prior p is close to 0 then all of the time T should be invested in joint sensing, whereas as the prior increases from 0 towards 1, individual sensing progressively takes a larger and larger proportion of the total available time T. This phenomenon is explored further in scatter plots where the input intensities are varied as well.
In Fig. (11), ternary plots of I are shown as the prior p is varied. The vertices of the equilateral triangle are at (T1, 0, 0), (0, T2, 0) and (0, 0, T3), with T1 + T2 + T3 = 1. It can be seen that the distribution of the time resource T = 1 changes as p is varied. As p tends to 0, the optimal value of T3 tends to consume all the available time resource T
Fig. 6: Optimal time T3 vs. prior probability p with T1 = T2 and T1 + T2 + T3 = T, considering I as the metric.
leaving the optimal values of T1 and T2 approaching 0. Mirror symmetry in the arguments T1 and T2 can be observed. It can further be seen that the optimal argument always lies on the line that bisects the equilateral triangle into two right triangles, i.e. (T1, T2, T3) := ((1 − T3)/2, (1 − T3)/2, T3) where 0 ≤ T3 ≤ 1.
A relationship between mutual information and MAP (maximum a posteriori) detection for our problem is investigated in Fig. (12), in which the Bayesian probability of total correct detections, Pd, given in Appendix B, is plotted against the respective mutual information I given in Appendix A. The I and Pd curves do not attain their optima at the same input arguments. We further cross-checked our MAP detector performance with empirical means, as shown in Fig. (13), by generating 10^5 samples from the multivariate Poisson mixture random vector Y at each time instant (T1, T2, T3) for fixed given parameters p, λ0, λ1. A total of 200 linearly spaced time instants of T3 are taken in the range [0, 1], assuming T1 = T2 = (1 − T3)/2. The samples are drawn from each of the four components of the mixture Y in proportion to their respective prior probabilities. Each sample is then passed through a MAP detector to decide which of the four mixture components is the source of that sample. These 10^5 detections are then compared with their true sources to compute the empirical rate of correct detections Cd at each time instant T3. We found that this empirical Cd is consistent with the analytic Pd. The computing method for I is described step-by-step in the flowcharts given in Fig. (9) and Fig. (10). Bounds on the entropy of a Poisson variable are available in [17]; we use them to validate that the computed Poisson entropies are within bounds.
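The Monte Carlo cross-check can be reproduced in miniature (our sketch, with illustrative parameters rather than the paper's: Knuth's Poisson sampler plus the same MAP rule):

```python
import math, random

def sample_poisson(lam, rng):
    """Knuth's multiplication method; adequate for the modest rates here."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def empirical_cd(p, lam0, lam1, T1, T2, T3, n=2000, seed=0):
    """Fraction of correct MAP detections over n simulated draws of (X, Y)."""
    rng = random.Random(seed)
    rate = lambda x: lam0 * (1 - x) + lam1 * x
    def log_poiss(k, lam):
        if lam <= 0:
            return 0.0 if k == 0 else float("-inf")
        return k * math.log(lam) - lam - math.lgamma(k + 1)
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    priors = {(0, 0): (1 - p) ** 2, (0, 1): p * (1 - p),
              (1, 0): p * (1 - p), (1, 1): p ** 2}
    mus = {x: (rate(x[0]) * T1, rate(x[1]) * T2,
               (rate(x[0]) + rate(x[1])) * T3) for x in states}
    correct = 0
    for _ in range(n):
        x = rng.choices(states, weights=[priors[s] for s in states])[0]
        y = tuple(sample_poisson(mu, rng) for mu in mus[x])
        xhat = max(states, key=lambda s: math.log(priors[s]) +
                   sum(log_poiss(k, mu) for k, mu in zip(y, mus[s])))
        correct += xhat == x
    return correct / n
```

With well-separated intensities the empirical rate approaches 1, and with indistinguishable intensities it falls to the chance level set by the priors, mirroring the consistency between Cd and Pd reported above.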
For the unconstrained problem of choosing individual sensing over joint sensing for a given prior and fixed intensities, for an infinitesimal or finite T, the first derivative of I at T = 0 given in [8] (together with the fact that I is concave in T [9]) can be helpful. It is also helpful in verifying some of the numerical results of I with
analytical ones. In Fig. (7) we plot the first derivative of I at T = 0 vs. the prior p, for λ0 = 2 and λ1 = 100: d/dT I(X1, X2; Y3) |T=0 for joint sensing, and the corresponding derivative for individual sensing with (T1 = T/2, T2 = T/2).

Fig. 8: Joint sensing and individual sensing vs. T. Note that joint sensing is the better method when T < 2.25; for T > 2.25, individual sensing is better.

We can see that at p = 0.5 the first derivatives of both the
joint and individual sensing with respect to the time variable T are the same; this means that no matter what the given time is, individual sensing is never worse than joint sensing at p = 0.5. For p ≠ 0.5, we further need to know the given finite time T in order to decide which of the two methods is better for that time. As can be seen in Fig. (8), the two I curves cross each other, making one better than the other on two different time segments. Further note that in Fig. (8) the numerical derivative values of I at T = 0 are 0.055011541474291 and 0.061490803759598 for individual and
joint sensing, respectively, when I is computed according to the algorithm defined in the flowcharts. Comparing these numerical derivative values with the analytically calculated ones, 0.055011540931595 and 0.061490807291534, the two agree to eight decimal places, which validates that the computations are sufficiently accurate.
Fig. 14 and Fig. 15 illustrate the scatter plots for a range of input intensities
in the constrained problem w.r.t. the I metric. The scatter plots on the left-hand side illustrate the
optimal value of mutual information at any point (λ0·T, λ1·T), where λ0·T ≤ λ1·T and
0 < λ1·T ≤ 5, for a given prior p and T = 1. The corresponding maximizing argument
is shown on the right-hand side of the scatter plots. It is noticed that for prior p < 0.5,
the farther the four components of the multivariate Poisson mixture are from each other in terms
of their intensities, the more the optimal solution moves toward individual sensing. In
other words, the closer the four pmfs (under hypothesis testing) get, the more the optimal
sensing relies on joint sensing for prior p < 0.5. If the prior p ≥ 0.5, then irrespective of
the other parameter values in the model, individual sensing is the optimal strategy under the
I metric.
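The optimal argument for the constrained problem is found by a one-dimensional search along the symmetry line (T1, T2, T3) = ((1−α)/2, (1−α)/2, α). A minimal sketch of such a grid search (names hypothetical; a toy concave stand-in objective is used here in place of the paper's I or Pd routines):

```python
import numpy as np

def best_alpha(objective, num=101):
    """Grid search over the symmetry line (T1,T2,T3) = ((1-a)/2, (1-a)/2, a),
    0 <= a <= 1; 'objective' stands in for the I or Pd computation."""
    alphas = np.linspace(0.0, 1.0, num)
    vals = [objective((1 - a) / 2, (1 - a) / 2, a) for a in alphas]
    k = int(np.argmax(vals))
    return alphas[k], vals[k]

# toy concave stand-in objective, peaking at T3 = 0.25
a_star, v_star = best_alpha(lambda T1, T2, T3: -(T3 - 0.25) ** 2)
```

If the cost function is indeed concave along this line, as observed computationally, the grid search could be replaced by a golden-section search.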
6. CONCLUSION
In this work we have addressed a simple abstract sensor scheduling problem in which
two Poisson sources are observed either individually or jointly with a switching mechanism
and a single counter. The scheduling problem is to choose the times associated with the
different switch configurations given a total time constraint.
For our specific problem we were able to solve the optimization problem using computational
methods, primarily because of the relatively small number of free parameters. For
the case in which mutual information is the cost function, we observed but were unable
to prove that the cost function is concave in the switch times; if true, this reduces the
optimization problem to a one-dimensional search. We observed that the two different
metrics of mutual information and probability of correct detection did not always lead to
the same scheduling solution. A second interesting result was that, under the mutual
information metric, when the prior p on the sources was greater than 0.5, individual
sensing was always the best that could be done.
In a certain sense our computational results are not particularly interesting: in most cases
the cost functions are quite flat, and there was no dramatic variation in mutual information
or probability of correct detection across the space of possible switch configuration times.
A communications engineer faced with the problem exactly as we have posed it would be
hard-pressed to justify doing anything other than the simplest and most obvious solution,
namely taking the available time and dividing it equally between individual observation
of the two sources.
However, the fact that there is some variation in performance based on an intelligent choice
of the switch times leaves open the possibility that investigations like ours will prove valuable for
more complex problems. Several extensions immediately come to mind: more sources
(perhaps hundreds or thousands, not two), unknown Poisson rates, different unknown
or nonexistent priors, different source distributions. Unfortunately, the jump to a larger
number of sources brings with it a combinatorial explosion which would rule out the
brute-force optimization techniques used in this paper in very short order. Alternate
switching strategies might include an adaptive one in which the switching time and
configurations are determined online or in real time depending on the observations up
to the current time. We recommend and are interested in continuing investigations along
these lines.
APPENDIX B
EXPRESSION OF PROBABILITY OF TOTAL CORRECT DETECTIONS Pd

Pd = Σ_(y1,y2,y3) f00 · 1{(f00 ≥ f01) ∧ (f00 ≥ f10) ∧ (f00 ≥ f11)}
+ Σ_(y1,y2,y3) f01 · 1{(f01 > f00) ∧ (f01 ≥ f10) ∧ (f01 ≥ f11)}
+ Σ_(y1,y2,y3) f10 · 1{(f10 > f00) ∧ (f10 > f01) ∧ (f10 ≥ f11)}
+ Σ_(y1,y2,y3) f11 · 1{(f11 > f00) ∧ (f11 > f01) ∧ (f11 > f10)},

where
f00 ≡ (1 − p)² · Poiss(y1; λ0T1) · Poiss(y2; λ0T2) · Poiss(y3; 2λ0T3),
f01 ≡ p(1 − p) · Poiss(y1; λ0T1) · Poiss(y2; λ1T2) · Poiss(y3; (λ0 + λ1)T3),
f10 ≡ p(1 − p) · Poiss(y1; λ1T1) · Poiss(y2; λ0T2) · Poiss(y3; (λ1 + λ0)T3),
f11 ≡ p² · Poiss(y1; λ1T1) · Poiss(y2; λ1T2) · Poiss(y3; 2λ1T3),
∧ is the logical AND operation, and 1{·} is the indicator function.
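Since the four indicator sums above, with ties broken toward the earlier hypothesis, together pick out the largest weighted likelihood at each grid point, Pd can equivalently be computed as the sum of the pointwise maximum of f00, …, f11. A minimal sketch, assuming SciPy and hypothetical names:

```python
import numpy as np
from scipy.stats import poisson

def total_correct_detection_prob(lam0, lam1, p, T1, T2, T3, ymax=60):
    """Sketch of P_d for the four-hypothesis MAP test of Appendix B,
    with each Poisson support truncated at ymax."""
    y = np.arange(ymax + 1)

    def joint(a1, a2, a3):
        # conditional pmf on the (y1, y2, y3) grid via outer products
        return (poisson.pmf(y, a1)[:, None, None]
                * poisson.pmf(y, a2)[None, :, None]
                * poisson.pmf(y, a3)[None, None, :])

    f00 = (1 - p) ** 2 * joint(lam0 * T1, lam0 * T2, 2 * lam0 * T3)
    f01 = p * (1 - p) * joint(lam0 * T1, lam1 * T2, (lam0 + lam1) * T3)
    f10 = p * (1 - p) * joint(lam1 * T1, lam0 * T2, (lam1 + lam0) * T3)
    f11 = p ** 2 * joint(lam1 * T1, lam1 * T2, 2 * lam1 * T3)
    # MAP rule: the winning (largest) weighted likelihood is counted once
    # per grid point, matching the tie-broken indicator sums above.
    stacked = np.stack([f00, f01, f10, f11])
    return float(stacked.max(axis=0).sum())
```

When λ0 = λ1 and p = 0.5 the four hypotheses are indistinguishable and this returns 0.25, the probability of guessing the prior-maximal hypothesis.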
APPENDIX C
NUMERICAL APPROXIMATION OF THE POISSON PMF AND MUTUAL INFORMATION

Poiss(x; λ) ≈ e^(−λ) λ^x / x!  for x ≤ xc,  and 0 for x > xc,    (19)

where xc is such that the cumulative distribution function of the Poisson random variable at
xc is approximately equal to 1, i.e. CDFPoiss(xc; λ) ≈ 1. When represented in double-precision
floating-point arithmetic under the IEEE 754 standard, this means that the numerical values
of CDFPoiss(xc; λ) and CDFPoiss(xc + 1; λ) are identical and equal to numerical one.
The infinite summations in the mutual information expression given in Appendix A are all
truncated from 0 to xc = CDFPoiss⁻¹(1 − ε; (λ1 + λ1)T) for a given value of λ1 and T, where
CDFPoiss⁻¹ is the inverse Poisson CDF and ε is the machine precision number, 2⁻⁵³ [18].
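As an illustration, the truncation point xc can be obtained from the inverse Poisson CDF; a sketch using SciPy (function name hypothetical):

```python
from scipy.stats import poisson

def truncation_point(lam):
    """Truncation rule of Appendix C: smallest integer x_c whose Poisson CDF
    is numerically indistinguishable from 1 in double precision."""
    eps = 2.0 ** -53  # IEEE 754 double-precision machine epsilon
    return int(poisson.ppf(1.0 - eps, lam))

xc = truncation_point(10.0)  # tail mass beyond x_c is below ~2**-53
```

Beyond xc, the pmf values are negligible relative to double-precision resolution, so the infinite sums can safely stop there.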
APPENDIX D
CODE DESCRIPTION

Since the computations for I can be done in parallel, we may exploit this fact by vectorizing
the algorithm and then evaluating I at the grid points in parallel, as defined in the
flowcharts in Fig. 9 and Fig. 10, for fast computational throughput. However, it must
be noted that this way of vectorized computation, by first truncating the
Poisson distributions and then forming the grid of points for the further calculation of
I, is not effective for large values of the Poisson intensities involved in the problem: the
grid size grows cubically w.r.t. the truncated support of the involved pmfs. So,
for large values of the Poisson intensities, say λ > 100, we need to resort to Monte Carlo
methods.
For the calculation of Pd, the same computed conditional Poisson pmfs can be used for all
the comparisons required in the conditional expression given in Appendix B.
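The vectorized scheme just described can be sketched as follows; a minimal Python illustration (all names hypothetical) that truncates each Poisson support first, then forms the 3-D grid, using the hypothesis weights of Appendix B:

```python
import numpy as np
from scipy.stats import poisson

def mutual_information(lam0, lam1, p, T1, T2, T3, eps=2.0 ** -53):
    """Sketch of the vectorized I(X1,X2; Y1,Y2,Y3) computation.
    Memory grows cubically in the truncated supports, so this is only
    practical for modest intensities, as noted in Appendix D."""
    # truncation limits from the largest intensity seen by each counter
    n1 = int(poisson.ppf(1 - eps, lam1 * T1)) + 1
    n2 = int(poisson.ppf(1 - eps, lam1 * T2)) + 1
    n3 = int(poisson.ppf(1 - eps, 2 * lam1 * T3)) + 1
    y1, y2, y3 = np.arange(n1), np.arange(n2), np.arange(n3)

    def joint(a1, a2, a3):  # conditional pmf on the grid via outer products
        return (poisson.pmf(y1, a1)[:, None, None]
                * poisson.pmf(y2, a2)[None, :, None]
                * poisson.pmf(y3, a3)[None, None, :])

    u = [joint(lam0 * T1, lam0 * T2, 2 * lam0 * T3),
         joint(lam0 * T1, lam1 * T2, (lam0 + lam1) * T3),
         joint(lam1 * T1, lam0 * T2, (lam1 + lam0) * T3),
         joint(lam1 * T1, lam1 * T2, 2 * lam1 * T3)]
    w = [(1 - p) ** 2, p * (1 - p), p * (1 - p), p ** 2]  # Appendix B weights
    mix = sum(wi * ui for wi, ui in zip(w, u))

    def H(q):  # entropy in bits of a (possibly truncated) pmf, 0*log0 := 0
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    # I = H(Y) - H(Y|X), with H(Y|X) = sum_i w_i * H(u_i)
    return H(mix) - sum(wi * H(ui) for wi, ui in zip(w, u))
```

For the Pd computation, the same four conditional pmf arrays u1, …, u4 can be reused, as the text notes.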
[Flowchart omitted; summary:] Given inputs λ0, λ1 and T, take D points (T1, T2, T3) from the convex set {T1 + T2 + T3 = T : T1 = T2, 0 ≤ T1 ≤ T/2, 0 ≤ T3 ≤ T} and choose N points from the set {p : 0 < p < 1}. For each i = 1, …, D and j = 1, …, N, call subroutine MI to compute I(X1X2; Y1Y2Y3) at p(j) and (T1(i), T2(i), T3(i)), and collect the (i, j)-th value. Finally, display the plot of I(X1X2; Y1Y2Y3) vs. (p, T3).
Fig. 9: Flowchart 1: Algorithm description for computation of I.
[Flowchart omitted; summary of subroutine MI:] Inputs: λ0, λ1, p(j) and (T1(i), T2(i), T3(i)).
1. Calculate the truncation limits y1max := PoissonCDF⁻¹(1 − 2⁻⁵³, λ1T1(i)), y2max := PoissonCDF⁻¹(1 − 2⁻⁵³, λ1T2(i)) and y3max := PoissonCDF⁻¹(1 − 2⁻⁵³, 2λ1T3(i)).
2. Form the 3-fold Cartesian product Y1 × Y2 × Y3 = {(y1, y2, y3) | y1 ∈ {0, 1, …, y1max}, y2 ∈ {0, 1, …, y2max}, y3 ∈ {0, 1, …, y3max}}.
3. Compute the conditional probabilities at each (y1, y2, y3), where P(y, λ) denotes the Poisson pmf:
u1 := P(y1, λ0T1(i)) P(y2, λ0T2(i)) P(y3, 2λ0T3(i)),
u2 := P(y1, λ0T1(i)) P(y2, λ1T2(i)) P(y3, (λ0 + λ1)T3(i)),
u3 := P(y1, λ1T1(i)) P(y2, λ0T2(i)) P(y3, (λ1 + λ0)T3(i)),
u4 := P(y1, λ1T1(i)) P(y2, λ1T2(i)) P(y3, 2λ1T3(i)).
4. Compute the mixture probability at each (y1, y2, y3): P(y1, y2, y3) := p(j)² u1 + p(j)(1 − p(j)) u2 + p(j)(1 − p(j)) u3 + (1 − p(j))² u4.
5. Compute the mixture entropy H(Y1, Y2, Y3) := − Σ(y1=0..y1max) Σ(y2=0..y2max) Σ(y3=0..y3max) P(y1, y2, y3) Log2 P(y1, y2, y3).
6. Compute the conditional entropy H(Y1, Y2, Y3 | X1, X2) := − p(j)² ΣΣΣ u1 Log2(u1) − p(j)(1 − p(j)) ΣΣΣ u2 Log2(u2) − p(j)(1 − p(j)) ΣΣΣ u3 Log2(u3) − (1 − p(j))² ΣΣΣ u4 Log2(u4), with the triple sums over the same ranges as above.
7. Set I(Y1, Y2, Y3; X1, X2) := H(Y1, Y2, Y3) − H(Y1, Y2, Y3 | X1, X2).
8. Validation: if I < 0 or I > min(H(X1, X2), H(Y1, Y2, Y3)), terminate with an error; otherwise return I.
Fig. 10: Flowchart 2: subroutine for computation of I.
Fig. 11: I(X; Y) vs. (T1, T2, T3) under the time constraint T1 + T2 + T3 = 1 for λ0 = 10,
λ1 = 20, T = 1 and varying prior probability p. It can be seen from (a)-(f) that as the
prior p varies from 0.00001 to 0.5, the optimal solution drifts from (0, 0, 1) to (0.5, 0.5, 0)
and stays there as p is varied further from 0.5 to 0.99999, along the line of symmetry
(T1, T2, T3) := ((1−α)/2, (1−α)/2, α) where 0 ≤ α ≤ 1.
Fig. 12: Mutual information I(X; Y ) and probability of total correct detections Pd vs. T3
for prior probabilities of 0.25, 0.5, 0.75 and 0.99.
Fig. 13: Empirical correct-decision rate Cd and analytical probability of total correct
detections Pd vs. T3 for prior probabilities of 0.25, 0.5, 0.75 and 0.99.
Fig. 14: Left: I^O(X; Y) vs. (λ0T, λ1T) in the region λ1T ≥ λ0T; right: corresponding
optimal argument parameter α^O vs. (λ0T, λ1T) for varying prior probabilities p. The
search for each optimal argument α^O, for any fixed (λ0T, λ1T) and p, is performed over
the line (T1, T2, T3) := ((1−α)/2, (1−α)/2, α), where 0 ≤ α ≤ 1 and T1 + T2 + T3 = 1.
Fig. 15: Left: I^O(X; Y) vs. (λ0T, λ1T) in the region λ1T ≥ λ0T; right: corresponding
optimal argument parameter α^O vs. (λ0T, λ1T) for varying prior probabilities p. The
search for each optimal argument α^O, for any fixed (λ0T, λ1T) and p, is performed over
the line (T1, T2, T3) := ((1−α)/2, (1−α)/2, α), where 0 ≤ α ≤ 1 and T1 + T2 + T3 = 1.
REFERENCES
[1] A. O. Hero, D. Castañón, D. Cochran, and K. Kastella, Foundations and Applications of Sensor Management.
Springer Science+Business Media, 2007.
[2] H. Lee, K. L. Teo, and A. E. Lim, “Sensor scheduling in continuous time,” Automatica, vol. 37, no. 12,
pp. 2017–2023, 2001.
[3] J. M. Manyika and H. F. Durrant-Whyte, “Information-theoretic approach to management in decentralized
data fusion,” in Applications in Optical Science and Engineering. International Society for Optics and
Photonics, 1992, pp. 202–213.
[4] W. W. Schmaedeke, “Information-based sensor management,” in Optical Engineering and Photonics in
Aerospace Sensing. International Society for Optics and Photonics, 1993, pp. 156–164.
[5] Y. Yang and R. S. Blum, “MIMO radar waveform design based on mutual information and minimum
mean-square error estimation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 1,
2007.
[6] M. Payaró and D. P. Palomar, “Hessian and concavity of mutual information, differential entropy, and
entropy power in linear vector Gaussian channels,” IEEE Transactions on Information Theory, vol. 55,
no. 8, pp. 3613–3628, 2009.
[7] S. Verdú, “Mismatched estimation and relative entropy,” IEEE Transactions on Information Theory,
vol. 56, no. 8, pp. 3712–3720, 2010.
[8] D. Guo, S. Shamai, and S. Verdú, “Mutual information and conditional mean estimation in Poisson
channels,” IEEE Transactions on Information Theory, vol. 54, no. 5, pp. 1837–1849, 2008.
[9] R. Atar and T. Weissman, “Mutual information, relative entropy, and estimation in the Poisson channel,”
IEEE Transactions on Information Theory, vol. 58, no. 3, pp. 1302–1318, 2012.
[10] L. Wang, D. E. Carlson, M. R. Rodrigues, R. Calderbank, and L. Carin, “A Bregman matrix and the gradient
of mutual information for vector Poisson and Gaussian channels,” IEEE Transactions on Information
Theory, vol. 60, no. 5, pp. 2611–2629, 2014.
[11] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[12] S. M. Ross, Stochastic Processes. Wiley New York, 1996, vol. 2.
[13] M. Fahad, “Sensing Methods for Two-Target and Four-Target Detection in Time-Constrained Vector
Poisson and Gaussian Channels,” Ph.D. dissertation, Michigan Technological University, Houghton, MI,
49931-1295, May 2021. [Online]. Available: https://doi.org/10.37099/mtu.dc.etdr/1193
[14] R. W. Yeung, Information Theory and Network Coding. Springer Science+Business Media, 2008.
[15] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley Sons, 2012.
[16] T. A. Schonhoff and A. A. Giordano, Detection and Estimation Theory and its Applications. Pearson
College Division, 2006.
[17] J. A. Adell, A. Lekuona, and Y. Yu, “Sharp bounds on the entropy of the Poisson law and related
quantities,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2299–2306, 2010.
[18] L. N. Trefethen and D. Bau III, Numerical Linear Algebra. SIAM, 1997, vol. 50.
Muhammad Fahad (M'17) received the Ph.D. and M.S. degrees in Electrical Engineering
from Michigan Technological University, Houghton, MI, in 2021 and 2016, respectively.
He received the M.S. degree in Electrical Engineering from the University of Engineering and
Technology, Lahore, Pakistan, and the B.E. degree (with distinction) in Electronics Engineering
from Mehran University of Engineering and Technology, Pakistan, in 2007 and 2005,
respectively.
He was a Fulbright Ph.D. Scholar during 2011-2017 and is currently serving as an Adjunct
Assistant Professor in the Department of Applied Computing, College of Computing, at
Michigan Technological University, Houghton, MI. His research interests are in sensor
management and Poisson processes.
Daniel R. Fuhrmann (S’78-M’84-SM’95-F’10) received the B.S.E.E. degree (cum laude)
from Washington University, St. Louis, MO, USA, in 1979 and the M.A., M.S.E., and the
Ph.D. degrees from Princeton University, Princeton, NJ, in 1982 and 1984, respectively.
From 1984 to 2008, he was with the Department of Electrical Engineering, now
the Department of Electrical and Systems Engineering, Washington University. In the
2000-2001 academic year, he was a Fulbright Scholar visiting the Universidad Nacional
de La Plata, Buenos Aires, Argentina. He is currently the Dave House Professor and Chair
of the Department of Applied Computing, Michigan Technological University, Houghton,
MI. His research interests lie in various areas of statistical signal and image processing,
including sensor array signal processing, radar systems, and adaptive sensing.
Dr. Fuhrmann is a former Associate Editor of the IEEE TRANSACTIONS ON SIGNAL PROCESSING. He was
the Technical Program Chairman for the 1998 IEEE Signal Processing Workshop on Statistical Signal and
Array Processing, and General Chairman of the 2003 IEEE Workshop on Statistical Signal Processing. He has
been an ASEE Summer Faculty Research Fellow with the Naval Underwater Systems Center, New London,
CT, a consultant to MIT Lincoln Laboratory, Lexington, MA, and an ASEE Summer Faculty Fellow with the
Air Force Research Laboratory, Dayton, OH.