This document summarizes a technique for diagnosing faults in antenna arrays using compressed sensing on near-field measurement data. The technique aims to reduce measurement time by acquiring data from fewer measurement points than traditional methods such as back-propagation require. It does this by taking the difference between near-field measurements of a fault-free reference array and of the array under test, creating a sparse "innovation vector". This vector is then reconstructed using the L1-norm regularization technique from compressed sensing. Numerical examples on a 289-element array show the technique can accurately detect up to 3 faulty elements from a number of measurement points on the order of K log N, where K is the number of faults and N is the number of array elements.
Journal of Information Engineering and Applications www.iiste.org
ISSN 2224-5782 (print) ISSN 2225-0506 (online)
Vol 2, No.7, 2012
Array Diagnosis using Compressed Sensing in Near Field
K.Ch.Sri Kavya1, K.Sarat Kumar2, Habibulla Khan3, T. Satish Chandra Gupta4, K.Varun Deep4, K.Yasaswi4, Ch.S.Venu Madhav4
1. Women Scientist, Department of ECE, KL University, Guntur, INDIA – 522502
2. Associate Dean, Sponsored Research, KL University, Guntur, INDIA – 522502
3. Professor & HOD, Department of ECE, KL University, Guntur, INDIA – 522502
4. Final Year B.Tech, Department of ECE, KL University, Guntur, INDIA – 522502
* E-mail of the corresponding author: kavya@kluniversity.in
Abstract
This paper presents a technique for array diagnosis that uses a small number of measured data acquired by a near-field system, exploiting concepts from the compressed sensing technique used in image processing. The high cost of large-array diagnosis in near-field facilities is mainly caused by the time required for data acquisition, so there is a need to decrease the measurement time while keeping the reconstruction of the array satisfactory. The proposed technique uses fewer measurement points than other techniques such as the back-propagation method and the standard matrix-inversion method.
Keywords: Arrays, Compressed sensing, near field
I. INTRODUCTION
Near-field facilities are nowadays routinely used for array testing. Besides the evaluation of the radiation pattern, another important application is the diagnosis of arrays. The most commonly used method for planar arrays is the back-propagation algorithm [1], which is based on the Fourier relationship between the field on the array aperture and the field on the measurement plane. In order to obtain a sufficiently high resolution, it is necessary to acquire the data on an area in front of the antenna whose dimension allows collection of all the relevant energy radiated by the antenna; otherwise the resolution is seriously degraded.
Using the standard half-wavelength measurement step, the number of measurement points turns out to be very large. Data acquisition time is consequently the main limiting factor of this technique. For large arrays the data acquisition can require a significant amount of time, and hence high costs.
One important way to decrease the measurement time is to reduce the number of data. Over the last couple of decades an effective theory has been developed to reduce the set of data in antenna measurements under a-priori knowledge of the shape of the antenna under test [4]. In particular, considering the class of antennas contained in a given convex surface, the method allows evaluation of the minimum amount of information characterizing such a class of radiating sources, assuring a reconstruction of the field from the acquired data within any desired error level. This method has been successfully applied to near-field measurements to reduce the set of data [5]. Using other a-priori information, for example on the spatial correlation of the sources, it is possible to further decrease the number of measurements [6].
With reference to array diagnosis, the goal is to identify the faulty elements in the grid of radiating elements. Hopefully, the number of faulty elements is small, and this suggests considering as excitation coefficients the difference between the excitations of a reference failure-free array and those of the array under test. Such a "new" antenna is a very sparse array, with a small number of radiating elements, allowing a considerable decrease in the amount of information required to identify the (small number of) failures.
The failure-identification algorithm takes advantage of recent results obtained in the field of compressed sensing, which allow the harmonic estimation of sparse signals using a small set of data [7], [9]. It is useful to stress that there is a strict connection between Fourier analysis and array synthesis. This connection allows the use of super-resolution algorithms developed for harmonic estimation, such as MUSIC and ESPRIT, to perform spatial processing.
II. DESCRIPTION OF THE METHOD
Let us consider an Array under Test (AUT) consisting of N radiating elements located in known positions $r_n$. Let $x_n$ and $f_n(\theta,\phi)$ be the excitation coefficient and the electric-field radiation pattern of the nth radiating element, respectively. A probe having effective height $h(\theta,\phi)$ is placed in M spatial points. The voltage at the probe output can be expressed by the linear system

$AX = Y$  (1)
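As an illustration (not taken from the paper), the following Python/NumPy sketch assembles a matrix of this kind for a planar array measured on a near-field plane. It assumes idealized isotropic elements and probe, so that each entry of A collapses to the scalar free-space Green's function between element n and probe point m; in the general case the entries also involve the element patterns $f_n(\theta,\phi)$ and the probe effective height $h(\theta,\phi)$.

```python
import numpy as np

def measurement_matrix(elem_xy, probe_xyz, wavelength):
    """Entry A[m, n] links the excitation of element n (lying in the
    z = 0 plane) to the probe voltage at measurement point m."""
    k = 2.0 * np.pi / wavelength                        # free-space wavenumber
    elem_xyz = np.column_stack([elem_xy, np.zeros(len(elem_xy))])
    # distance between every probe point and every element
    R = np.linalg.norm(probe_xyz[:, None, :] - elem_xyz[None, :, :], axis=2)
    return np.exp(-1j * k * R) / R                      # scalar Green's function

# 17 x 17 = 289 elements on a half-wavelength lattice, as in Section III
lam = 1.0
xs = (np.arange(17) - 8) * lam / 2.0
elem_xy = np.array([(x, y) for x in xs for y in xs])

# M randomly placed probe points on a plane 3 wavelengths above the aperture
rng = np.random.default_rng(0)
M = 60
probe_xyz = np.column_stack([rng.uniform(-6 * lam, 6 * lam, size=(M, 2)),
                             np.full(M, 3.0 * lam)])
A = measurement_matrix(elem_xy, probe_xyz, lam)         # shape (M, N) = (60, 289)
```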
In array diagnosis the goal is to identify the faulty elements. The number of faulty elements will be denoted by K. In [2] this goal is reached by inverting the system (1); however, this requires M ≥ N. In this contribution, instead, the goal is to identify the failures with M << N, taking advantage of the fact that usually K << N.
If we consider the set of the arrays that must be tested, the excitation vectors are highly correlated, since a (hopefully) small number of elements are broken in each array. This suggests considering the innovation vector, obtained by subtracting the excitation vector of the AUT from that of a fault-free array. This method parallels a well-known method used in video signal processing based on encoding only the innovation in a frame of an image.
If we are only interested in the detection of faulty elements, we simply perform the measurements in M points on the measurement plane of the field radiated by the failure-free array; let $Y^{ref}$ be the vector collecting the measured data, wherein the apex stands for 'reference', and let $X^{ref}$ be the excitation vector of the reference array. Then the field radiated by the AUT is measured, obtaining the output vector $Y^{AUT}$. In the following, the vector of the excitations of the AUT will be denoted by $X^{AUT}$.
We consider the new system

$AX = Y$  (1)

where now

$X = X^{ref} - X^{AUT}$,  $Y = Y^{ref} - Y^{AUT}$

are "innovation" vectors. Note that if the number of faulty elements K is much smaller than N (as usually happens), we have an equivalent problem involving a highly sparse array. We suppose that M << N. In this case the problem is ill-posed, and a regularization procedure is required.
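Continuing the sketch above, the following lines simulate the innovation vectors under a simple hypothetical fault model in which a broken element radiates nothing (its excitation is forced to zero), so that K faults make the innovation vector K-sparse:

```python
# Continues the sketch above; a broken element is modeled (hypothetically)
# as radiating nothing, i.e. its excitation is forced to zero.
N = A.shape[1]
x_ref = np.ones(N, dtype=complex)        # failure-free reference excitations
K = 3
faults = rng.choice(N, size=K, replace=False)
x_aut = x_ref.copy()
x_aut[faults] = 0.0                      # AUT with K broken elements

y_ref = A @ x_ref                        # simulated reference measurements
y_aut = A @ x_aut                        # simulated AUT measurements

X_true = x_ref - x_aut                   # innovation excitations: K-sparse
Y = y_ref - y_aut                        # innovation data, satisfies A X = Y
```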
The standard approach to regularizing the matrix inversion is to introduce a-priori information in the inversion. This can be obtained by adding a penalty weight related to the norm of the vector X. In particular, a possible choice is the constrained minimization

$\min_X \|X\|_p$ subject to $\|AX - Y\|_2 \le \varepsilon$  (2)

wherein the subscript p stands for the $\ell_p$ norm, the subscript 2 stands for the $\ell_2$ norm, and $\varepsilon$ is the expected error level due to the noise and the measurement uncertainties.
Usually, the energy of X is used as a-priori information, and the usually adopted norm is the $\ell_2$ norm. However, in our case a strong piece of a-priori information is the sparsity of the array. In the absence of noise, the best choice would be the following constrained minimization:

$\min_X \|X\|_0$ subject to $AX = Y$  (3)

wherein $\|\cdot\|_0$ is the so-called 'zero-norm', equal to the number of non-zero elements of X. The zero-norm forces the solution to be sparse. Norms with $p \le 1$ share this property; however, $p < 1$ gives non-convex minimization problems, whose numerical solution is cumbersome. For this reason, the $\ell_1$ norm is the most interesting candidate to force a sparse solution [9], [10].
Accordingly, we solve the following constrained minimization:

$$\min_X \|X\|_1 \quad \text{subject to} \quad \|A X - Y\|_2 \le \epsilon \qquad (4)$$

whose solution has good stability in the presence of noise. Furthermore, from a computational point of view, the above minimization is a convex problem that has a unique minimum and for which efficient algorithms are available [13].
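A minimal sketch of how (4) could be solved with an off-the-shelf convex solver is given below; CVXPY is used here only as one possible tool, and the helper name and tolerance are illustrative, not the authors' implementation.

```python
# Minimal sketch of the constrained l1 minimization (4) using CVXPY.
# A, Y, and eps are assumed to come from the measurement step above.
import cvxpy as cp
import numpy as np

def recover_innovation(A: np.ndarray, Y: np.ndarray, eps: float) -> np.ndarray:
    """Solve min ||X||_1 subject to ||A X - Y||_2 <= eps."""
    N = A.shape[1]
    X = cp.Variable(N, complex=True)
    problem = cp.Problem(cp.Minimize(cp.norm(X, 1)),
                         [cp.norm(A @ X - Y, 2) <= eps])
    problem.solve()
    return X.value
```

In use, the estimated AUT excitations are $x^{ref} - \hat{X}$; entries whose amplitude deviates markedly from one flag the faulty elements.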
A survey of different regularization procedures is given in [13]. Broadly speaking, all regularization techniques use a-priori information on the problem to restrict the search space, increasing the probability of selecting the right solution. The available a-priori information generally has a different "degree of accuracy." The "best" regularization technique is the one that gives priority to the most "accurate" a-priori information. In our specific case, the value of the norm of the solution is not known a-priori, so the best choice is to use a-priori information on the noise level.
Summarizing, the strategy proposed in (4) is simple and effective, and requires a minimum amount of a-priori information (i.e., the noise level affecting the data).
Regarding the number of measurements, it must be noted that in the case of sparse far-field measurements A has essentially the Fourier structure recalled above, for which the compressed-sensing results of [7], [9] apply, and we must expect that a number of measurements on the order of K log N is required. In the case of near-field measurements the structure of A is more complex and, to the knowledge of the author, no exact results are available in the literature.
However, the results obtained from a large number of numerical simulations, some of which are reported in Section III, suggest that the dimensionality of the measurement space increases linearly with the sparsity coefficient K, but only logarithmically with the "signal domain" dimensionality N.
Finally, in real arrays the faulty elements are often grouped in C clusters. Accordingly, a more appropriate model is the so-called (K, C)-sparse model, in which the K non-zero entries are contained within at most C clusters. In this case a smaller number of measurements than in the non-clustered case is generally sufficient [12].
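Reference [12] develops this within model-based compressed sensing; as a loose illustration only, clustered supports are often promoted with a mixed l2/l1 (group) norm, as in the hedged sketch below, where the cluster partition is an assumed input and the approach is an analogy rather than the cited algorithm.

```python
# Illustrative only: a mixed l2/l1 (group) norm over pre-defined clusters
# favors solutions with few active clusters. This is NOT the model-based
# algorithm of [12], just a common convex surrogate for clustered sparsity.
import cvxpy as cp

def recover_clustered(A, Y, eps, clusters):
    """clusters: list of index lists covering candidate fault clusters."""
    X = cp.Variable(A.shape[1], complex=True)
    # Sum of per-cluster l2 norms: sparse at the cluster level.
    objective = sum(cp.norm(X[g], 2) for g in clusters)
    prob = cp.Problem(cp.Minimize(objective),
                      [cp.norm(A @ X - Y, 2) <= eps])
    prob.solve()
    return X.value
```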
III. NUMERICAL EXAMPLE
We consider a uniform planar array of N = 17 × 17 = 289 radiating elements, consisting of short dipoles parallel to a common axis and placed on a lattice with uniform spacing. All the excitation coefficients of the array are equal to one. The electric field is measured using an ideal probe placed on a plane parallel to the aperture of the array. Finally, 35 dB Gaussian noise is added to the data in order to simulate noisy measurements.
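A hedged sketch of this kind of setup follows; the scalar e^{-jkr}/r element-to-probe model, the lattice spacing, the plane height, and the number of probe points are all illustrative assumptions (the paper's short-dipole patterns and exact geometry are not reproduced).

```python
# Hedged sketch of the numerical setup: a 17x17 lattice, an ideal probe on a
# parallel plane, and a simplified scalar spherical-wave element model.
import numpy as np

rng = np.random.default_rng(1)
wavelength = 1.0
k = 2 * np.pi / wavelength
d = 0.5 * wavelength                 # assumed lattice spacing
z0 = 3.0 * wavelength                # assumed measurement-plane height

side = 17
xs = (np.arange(side) - side // 2) * d
elem = np.array([(x, y, 0.0) for x in xs for y in xs])     # N = 289 positions

M = 60                               # probe points, on the order of K log N
probe = np.column_stack([rng.uniform(xs[0], xs[-1], M),
                         rng.uniform(xs[0], xs[-1], M),
                         np.full(M, z0)])

r = np.linalg.norm(probe[:, None, :] - elem[None, :, :], axis=2)
A = np.exp(-1j * k * r) / r          # M x N system matrix

def add_noise(y, snr_db=35.0):
    """Additive complex Gaussian noise at the given SNR (35 dB in the paper)."""
    p_sig = np.mean(np.abs(y) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    n = np.sqrt(p_noise / 2) * (rng.standard_normal(y.shape)
                                + 1j * rng.standard_normal(y.shape))
    return y + n
```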
III.1 The CDF of the Estimated Excitation Amplitude without Noise
All the excitation coefficient amplitudes of the array are equal to one. The cumulative distribution function of the estimated excitation amplitudes in the absence of noise is shown in Fig. 1.
Fig. 1. CDF of the estimated excitation amplitude.
III.2 The CDF of the Estimated Excitation Amplitude with Noise
III.2.a No Broken Elements in the Array
In order to simulate noisy measurements, 35 dB Gaussian noise is added to the data. In this case there are no broken elements in the array, i.e. all the excitation coefficient amplitudes are equal to one.
The cumulative distribution function was computed from the noisy data with all the element amplitudes equal to one. The plot of estimated excitation amplitude vs. CDF in Fig. 2 shows the CDF of the noisy data in the case of no broken elements.
Fig. 2. CDF of the noisy data in the case of no broken elements.
III.2.b Broken Elements in the Array
In this case we consider that some of the elements of the array are broken, i.e. their excitation coefficients are equal to zero. The measurements are performed taking a number of failures drawn randomly from 0 to 3.
The number of measurement points must be on the order of K log N, where K = 3 is the maximum number of failures and N = 289 is the total number of elements of the array (for instance, 3 ln 289 ≈ 17, so a few tens of measurement points suffice). The CDF of the estimated excitation amplitudes in the presence of broken elements is plotted in Fig. 3.
Fig. 3. CDF of the estimated excitation amplitude in the presence of broken elements.
III.3 Rice Distribution
For comparison, let us consider the Rician distribution, which has a non-zero mean and provides a good fit to the CDF of the reconstructed amplitudes.
The Rice distribution $R(\nu, \sigma)$ describes the amplitude $R = \sqrt{X_1^2 + X_2^2}$, where $X_1$ and $X_2$ are statistically independent Gaussian random variables with common variance $\sigma^2$ and means $\mu_1$ and $\mu_2$, with $\nu = \sqrt{\mu_1^2 + \mu_2^2}$.
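For reference, the Rice CDF is available in scipy.stats (parameterized by b = ν/σ and scale = σ); the short sketch below, with an illustrative empirical-CDF helper, shows how a fitted Rice curve such as R(0.998, 0.034) could be compared with reconstructed amplitudes.

```python
# Sketch comparing an empirical CDF of reconstructed amplitudes with a
# fitted Rice CDF; scipy parameterizes Rice by b = v / sigma, scale = sigma.
import numpy as np
from scipy.stats import rice

v, sigma = 0.998, 0.034              # fitted no-failure Rice parameters (from the text)
grid = np.linspace(0.8, 1.2, 200)
rice_cdf = rice.cdf(grid, b=v / sigma, scale=sigma)   # CDF of R(v, sigma)

def empirical_cdf(samples, grid):
    """Fraction of samples below each grid point (illustrative helper)."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, grid, side="right") / samples.size
```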
III.3.a CDF of the Rice Distribution in the Case of No Broken Elements
The CDF of R(0.998, 0.034) is plotted in Fig. 4, showing good correspondence with both the ideal curve and the CDF of the noisy data. The reconstructed excitation in the absence of failures is very close to one in mean (0.998), with a variance of about 0.0012 (roughly -29 dB, i.e., almost at the same level as the -35 dB noise).
Fig. 4. Comparison between the CDFs of the ideal, Rician, and noisy data in the case of no broken elements.
III.3.b CDF of the Rice Distribution in the Case of Broken Elements
In the case of broken elements (i.e. excitation amplitude equal to zero), we again consider the CDF of the Rician distribution for comparison. Here the CDF of R(0.59, 0.20), having mean 0.62 and variance 0.036, is plotted. In this case the reconstruction is less satisfactory than in the no-failure case. This limits the use of the technique for accurate reconstruction of the excitations of faulty elements, at least with so few measurement points.
Fig. 5. Comparison of the Rician CDFs for the broken and no-broken cases with the ideal curve.
However, the results suggest that the method is well suited for a pass-fail test of the array. For example, using a threshold of 0.85, the probability of false alarm and the probability of a missed failure are both around 5%. A higher threshold further decreases the probability of a missed failure, at the price of a higher probability of false alarm. Of course, in case of detection of failures, it is possible to increase the number of measurements in order to confirm the presence of failures and to obtain a more accurate reconstruction of their excitation coefficients.
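Under the two fitted Rice models quoted above, such error rates could be evaluated as in the following sketch; the function name and threshold logic are illustrative, and the exact probabilities depend on the actual measurement ensemble rather than on these fitted curves alone.

```python
# Hedged sketch: pass/fail error rates under the two fitted Rice models
# reported in the text (no-fault R(0.998, 0.034), fault R(0.59, 0.20)).
from scipy.stats import rice

def error_rates(threshold, ok=(0.998, 0.034), broken=(0.59, 0.20)):
    """Return (false-alarm, missed-failure) probabilities for a threshold."""
    v_ok, s_ok = ok
    v_br, s_br = broken
    # False alarm: a healthy element reconstructed below the threshold.
    p_false_alarm = rice.cdf(threshold, b=v_ok / s_ok, scale=s_ok)
    # Missed failure: a broken element reconstructed above the threshold.
    p_missed = rice.sf(threshold, b=v_br / s_br, scale=s_br)
    return p_false_alarm, p_missed

p_fa, p_miss = error_rates(0.85)
```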
IV. CONCLUSION
This paper discusses in detail a simple method for array diagnosis that allows a significant decrease in the number of measurements compared to the number of array elements. The method is valuable for large arrays, where the measurement time is usually very high. The key point of the method is the use of a regularization procedure minimizing the 1-norm of the difference vector between a failure-free excitation vector and the excitation vector of the AUT. This allows us to obtain an equivalent sparse array, discarding the pieces of information not of interest for the failure-identification problem. The proposed technique is related to some of the results obtained in 'compressed sensing', widely used in data and image processing.
Further investigations could apply the technique to the maximum number of failures detectable in an array, in order to evaluate its limitations in terms of accuracy.
REFERENCES
[1] J. J. Lee, E. M. Ferrer, D. P. Woollen, and K. M. Lee, "Near-field probe used as a diagnostic tool to locate defective elements in an array antenna," IEEE Trans. Antennas Propag., vol. AP-36, no. 3, pp. 884-889, Jun. 1988.
[2] O. M. Bucci, M. D. Migliore, G. Panariello, and G. Sgambato, "Accurate diagnosis of conformal arrays from near-field data using the matrix method," IEEE Trans. Antennas Propag., vol. 53, no. 3, pp. 1114-1120, Mar. 2005.
[3] A. Buonanno and M. D'Urso, "On the diagnosis of arbitrary geometry fully active arrays," presented at the EuCAP Eur. Conf. on Antennas and Propagation, Barcelona, Apr. 12-16, 2010.
[4] O. M. Bucci, C. Gennarelli, and C. Savarese, "Representation of electromagnetic fields over arbitrary surfaces by a finite and non-redundant number of samples," IEEE Trans. Antennas Propag., vol. 46, pp. 361-369, Mar. 1998.
[5] O. M. Bucci and M. D. Migliore, "A new method to avoid the truncation error in near-field antennas measurements," IEEE Trans. Antennas Propag., vol. AP-54, no. 10, pp. 2940-2952, Oct. 2006.
[6] M. D. Migliore, "An informational theoretic approach to the microwave tomography," presented at the IEEE Antennas and Propagation Symp., San Diego, 2008.
[7] E. Candes and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406-5425, Dec. 2006.
[8] M. Akcakaya and V. Tarokh, "Shannon-theoretic limits on noisy compressive sensing," IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 492-504, Jan. 2010.
[9] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[10] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[11] E. J. Candes, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted l1 minimization," J. Fourier Anal. Appl., vol. 14, pp. 877-905, Oct. 2008.
[12] R. G. Baraniuk, V. Cevher, and M. B. Wakin, "Low-dimensional models for dimensionality reduction and signal recovery: A geometrical perspective," Proc. IEEE, vol. 98, no. 6, pp. 959-971, Jun. 2010.
[13] O. M. Bucci, G. D'Elia, and M. D. Migliore, "A new strategy to reduce the truncation error in near-field far-field transformations," Radio Sci., vol. 35, no. 1, pp. 3-17, Jan.-Feb. 2000.
[14] O. M. Bucci and G. Franceschetti, "On the degrees of freedom of scattered fields," IEEE Trans. Antennas Propag., vol. AP-38, pp. 918-926, 1989.