The problem of change-point detection has been well studied and adopted in many signal processing applications. In such applications, the informative segments of the signal are the stationary ones before and after the change-point. However, in some newer signal processing and machine learning applications, such as Non-Intrusive Load Monitoring (NILM), the information contained in the non-stationary transient intervals is of equal or even greater importance to the recognition process. In this paper, we introduce a novel clustering-based sequential detection of abrupt changes in an aggregate electricity consumption profile, with accurate decomposition of the input signal into stationary and non-stationary segments. We also introduce various event models in the context of clustering analysis. The proposed algorithm is applied to building-level energy profiles, with promising results on the residential BLUED power dataset.
Approaches to online quantile estimation (Data Con LA)
Data Con LA 2020
Description
This talk will explore and compare several compact data structures for estimation of quantiles on streams, including a discussion of how they balance accuracy against computational resource efficiency. A new approach providing more flexibility in specifying how computational resources should be expended across the distribution will also be explained. Quantiles (e.g., median, 99th percentile) are fundamental summary statistics of one-dimensional distributions. They are particularly important for SLA-type calculations and characterizing latency distributions, but unlike their simpler counterparts such as the mean and standard deviation, their computation is somewhat more expensive. The increasing importance of stream processing (in observability and other domains) and the impossibility of exact online quantile calculation together motivate the construction of compact data structures for estimation of quantiles on streams. In this talk we will explore and compare several such data structures (e.g., moment-based, KLL sketch, t-digest) with an eye towards how they balance accuracy against resource efficiency, theoretical guarantees, and desirable properties such as mergeability. We will also discuss a recent variation of the t-digest which provides more flexibility in specifying how computational resources should be expended across the distribution. No prior knowledge of the subject is assumed. Some familiarity with the general problem area would be helpful but is not required.
Speaker
Joe Ross, Splunk, Principal Data Scientist
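The talk's premise, estimating quantiles of a stream in bounded memory, can be illustrated with a deliberately naive baseline. The sketch below uses uniform reservoir sampling rather than the KLL or t-digest structures the talk covers (those give far better tail accuracy for the same memory); the class name and parameters are illustrative, not from the talk.

```python
import random

class ReservoirQuantile:
    """Bounded-memory quantile estimator via uniform reservoir sampling."""

    def __init__(self, capacity=256, seed=0):
        self.capacity = capacity
        self.sample = []
        self.seen = 0
        self.rng = random.Random(seed)

    def update(self, x):
        self.seen += 1
        if len(self.sample) < self.capacity:
            self.sample.append(x)
        else:
            # Replace a slot with probability capacity/seen, which keeps
            # the reservoir a uniform sample of everything seen so far.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.sample[j] = x

    def quantile(self, q):
        s = sorted(self.sample)
        idx = min(int(q * len(s)), len(s) - 1)
        return s[idx]

est = ReservoirQuantile()
for i in range(10_000):
    est.update(i)
print(est.quantile(0.5))   # approximate median of the stream
```

The weakness this baseline exposes is exactly what motivates the sketches in the talk: a uniform sample spends its memory evenly across the distribution, so extreme quantiles like the 99.9th percentile are estimated poorly.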
This document describes using sequential Monte Carlo methods like the sequential importance sampling (SIS) filter, sequential importance resampling (SIR) filter, and bootstrap filter to estimate parameters of linear time-invariant systems subjected to non-stationary earthquake excitations. It presents simulations applying these filters to identify parameters of a single-degree-of-freedom oscillator and a 3-story shear building model using synthetic earthquake data. The performance of different filters and resampling algorithms are compared based on identified natural frequencies and parameter convergence.
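The mechanics of the bootstrap (SIR) filter described above can be sketched on a scalar linear system rather than the oscillator and shear-building models of the document; the system, noise levels, and particle count below are illustrative assumptions, not values from the report.

```python
import math, random

rng = random.Random(1)

# Toy scalar system: x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k
a, q, r = 0.9, 0.5, 0.5          # dynamics coeff, process / measurement noise std
T, N = 200, 500                  # time steps, particles

# Simulate synthetic data
x, xs, ys = 0.0, [], []
for _ in range(T):
    x = a * x + rng.gauss(0, q)
    xs.append(x)
    ys.append(x + rng.gauss(0, r))

# Bootstrap filter: propagate particles with the dynamics, weight them
# by the measurement likelihood, then resample to fight weight degeneracy.
particles = [rng.gauss(0, 1) for _ in range(N)]
est = []
for y in ys:
    particles = [a * p + rng.gauss(0, q) for p in particles]
    w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
    s = sum(w)
    w = [wi / s for wi in w]
    est.append(sum(wi * p for wi, p in zip(w, particles)))
    particles = rng.choices(particles, weights=w, k=N)   # SIR resampling step

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, xs)) / T)
print(rmse)
```

The SIS variant omits the resampling line and reweights instead; the resulting weight degeneracy over long records is one of the behaviors the document's filter comparison examines.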
Deep learning models can be improved for physical processes by incorporating prior scientific knowledge. The paper proposes a method where a neural network predicts parameters like motion fields in governing equations, rather than directly predicting outputs. It applies this to ocean surface temperature prediction. The model predicts a motion field from past temperature images using a CNN. It then uses this motion field in a warping scheme based on the advection-diffusion equation to forecast future temperature. This outperforms comparison methods by leveraging physics knowledge without requiring manual specification of equations.
This document is a project report submitted for the degree of Bachelor of Technology in Civil Engineering at IIT Guwahati. It investigates using sequential Markov Chain Monte Carlo (MCMC) simulation based algorithms, also known as particle filters, for parameter estimation of linear time-invariant (LTI) structural systems subjected to non-stationary earthquake excitations. The report describes implementing Sequential Importance Sampling (SIS), Sequential Importance Resampling (SIR), and Bootstrap filters to identify stiffness and damping parameters of a 3-story shear building model and a multi-story reinforced concrete framed building (BRNS building) at IIT Guwahati. It compares the performance of these filters based on identified natural frequencies and
This document summarizes a research paper that estimates the scale parameter of the Nakagami distribution using Bayesian methods. The paper derives the posterior distributions of the scale parameter under different prior distributions, including uniform, inverse exponential, and Levy priors. It then finds the Bayesian estimators of the scale parameter under three loss functions: squared error loss function, quadratic loss function, and precautionary loss function. The paper uses Monte Carlo simulations to compare the performance of the different estimators.
Extended Kalman observer based sensor fault detection (IJECEIAES)
This article discusses a Kalman observer based fault detection approach. Computing residuals can detect faults, but in the presence of noise the uncertainties become significant. To reduce the influence of this noise, computing the instantaneous energy of the residuals gives better precision. The Kalman observer is used to estimate system performance and to eliminate unknown noise and external disturbances. Instantaneous Power Calculation based fault detection (IPCFD) can detect potential sensor faults in hybrid systems. The effectiveness of the proposed approach is illustrated on the main application.
Shunt Faults Detection on Transmission Line by Wavelet (paperpublications3)
Abstract: Transmission line fault detection is an important task because a major portion of power system faults occurs in the transmission system. This paper presents a fast and reliable method for transmission line shunt fault detection. An IEEE 9-bus test power system is modeled in MATLAB Simulink as a case study for various faults. In the proposed work, the Daubechies wavelet is applied to decompose the fault transients. Wavelet analysis helps in accurate classification of the various fault patterns, and a wavelet entropy measure based on it can characterize unsteady signals and the complexity of the system in the time-frequency plane. The results show that the proposed method is capable of detecting all shunt faults.
Using the Componentwise Metropolis-Hastings Algorithm to Sample from the Join... (Thomas Templin)
Markov Chain Monte Carlo (MCMC) methods provide a way to sample from a distribution (e.g., the joint posterior distribution for the parameters of a Bayesian model). These methods are useful when analytic solutions for parameter estimations do not exist. If the Markov chain is long, the sampled random variables are (approximately) identically distributed, but they are not independent because in a Markov chain each random variable depends on the previous one. However, because the Ergodic Theorem applies to MCMC methods, the chains converge (with probability one) to the stationary distribution, which for our purposes is the Bayesian joint posterior distribution.
MCMC methods are frequently implemented using a Gibbs sampler. This, however, requires knowledge of the parameters' conditional distributions, which are frequently not available. In this case, another MCMC method, the Metropolis-Hastings algorithm, can be used. The Metropolis-Hastings algorithm is a type of acceptance/rejection method. It requires a candidate-generating distribution, also called a proposal distribution. Ideally, the proposal distribution should be similar to the posterior distribution, but any distribution with the same support as the posterior is possible.
The Metropolis-Hastings algorithm generalizes to multidimensional distributions. In the multidimensional case, there are two types of algorithms: the "regular" algorithm and the "componentwise" algorithm. Whereas the "regular" algorithm computes a full proposal vector at each step, the "componentwise" algorithm, which is implemented here for a binomial regression model, updates one component at a time, so that the proposals for the individual components are evaluated, i.e., accepted or rejected, in turn.
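The componentwise scheme described above can be sketched as follows. This is a minimal illustration with a toy two-parameter Gaussian log-posterior standing in for the binomial-regression posterior of the original work; the target, step size, and iteration counts are assumptions for the sketch.

```python
import math, random

rng = random.Random(0)

def log_target(theta):
    # Toy log-posterior: independent normals centered at (1.0, -2.0),
    # a stand-in for the binomial-regression posterior in the text.
    x, y = theta
    return -0.5 * ((x - 1.0) ** 2 + (y + 2.0) ** 2)

def componentwise_mh(n_iter=20_000, step=1.0):
    theta = [0.0, 0.0]
    lp = log_target(theta)
    chain = []
    for _ in range(n_iter):
        for i in range(len(theta)):          # update one component at a time
            prop = list(theta)
            prop[i] += rng.gauss(0, step)    # symmetric random-walk proposal
            lp_prop = log_target(prop)
            # Symmetric proposal, so accept with prob min(1, target ratio)
            if math.log(rng.random() + 1e-300) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

chain = componentwise_mh()
burn = chain[5000:]
mean_x = sum(t[0] for t in burn) / len(burn)
mean_y = sum(t[1] for t in burn) / len(burn)
print(mean_x, mean_y)
```

Because the proposal here is symmetric, the Hastings correction term cancels; a "regular" sampler would instead perturb the whole vector before the single accept/reject decision.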
This document discusses using an observability index to decompose the Kalman filter into two filters applied sequentially: 1) A filter estimating the transitional process caused by uncertainty in initial conditions, which treats the system as deterministic. 2) A filter estimating the steady state that treats the system as stochastic. The observability index measures observability as a signal-to-noise ratio to evaluate how long it takes to estimate states in the presence of noise. This decomposition simplifies filter implementation and reduces computational requirements by restricting estimated states and dividing the observation period into transitional and steady state estimation.
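The two regimes the decomposition separates, a transitional phase dominated by initial-condition uncertainty and a stochastic steady state, are visible in the gain sequence of even a textbook scalar Kalman filter. The sketch below is a generic illustration of that behavior, not the document's decomposed filter; the system and noise values are assumed.

```python
import random

# Scalar Kalman filter: x_k = a x_{k-1} + w,  y_k = x_k + v.
# The gain K_k starts near 1 (transitional phase, large initial uncertainty)
# and settles to a constant (steady state), the two regimes the
# decomposition above treats with separate filters.
rng = random.Random(2)
a, q2, r2 = 0.95, 0.1, 1.0      # dynamics, process var, measurement var

x, ys = 5.0, []
for _ in range(50):
    x = a * x + rng.gauss(0, q2 ** 0.5)
    ys.append(x + rng.gauss(0, r2 ** 0.5))

xhat, P = 0.0, 100.0            # deliberately uncertain initial conditions
gains = []
for y in ys:
    # Predict
    xhat, P = a * xhat, a * a * P + q2
    # Update
    K = P / (P + r2)
    xhat += K * (y - xhat)
    P *= (1 - K)
    gains.append(K)

print(gains[0], gains[-1])
```

Once the gain has converged, the filter is effectively a fixed linear recursion, which is what makes a precomputed steady-state filter so much cheaper than the full time-varying one.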
This document provides a tutorial on the RESTART rare event simulation technique. RESTART introduces nested sets of states (C1 to CM) defined by importance thresholds. When the process enters a set Ci, Ri simulation retrials are performed within that set until exiting. This oversamples high importance regions to estimate rare event probabilities more efficiently than crude simulation. The document analyzes RESTART's efficiency, deriving formulas for the variance of its estimator and gain over crude simulation. Guidelines are also provided for choosing an optimal importance function to maximize efficiency. Examples applying the guidelines to queueing networks and reliable systems are presented.
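The thresholded-retrial idea can be sketched with fixed-effort multilevel splitting, a simplified cousin of RESTART (which performs Ri retrials on each entry into Ci rather than a fixed budget per level). The chain, thresholds, and retrial count below are illustrative assumptions; for this birth-death walk, the state on reaching a threshold is a single integer, which keeps the staged restart trivial.

```python
import random

rng = random.Random(7)
p_up = 0.4                       # biased walk, so climbing is rare
levels = [2, 4, 6, 8, 10]        # importance thresholds C1..CM
R = 200                          # trials per level

def run_until(start, target):
    """Walk from `start` until hitting `target` (success) or 0 (failure)."""
    s = start
    while 0 < s < target:
        s += 1 if rng.random() < p_up else -1
    return s == target

# Estimate P(reach 10 before 0 | start at 1) as a product of
# conditional level-crossing probabilities.
est, start = 1.0, 1
for lvl in levels:
    hits = sum(run_until(start, lvl) for _ in range(R))
    if hits == 0:
        est = 0.0
        break
    est *= hits / R              # conditional probability of climbing one level
    start = lvl                  # survivors sit exactly at the threshold
print(est)
```

Each stage estimates a moderate probability (roughly 0.3 to 0.45 here) instead of the rare overall event near 0.009, which is the variance-reduction effect the tutorial's efficiency formulas quantify.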
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion... (ijccmsjournal)
This document compares the computational complexity of four multi-sensor data fusion methods based on the Kalman filter using MATLAB simulations. The four methods are: group-sensor method, sequential-sensor method, inverse covariance form, and track-to-track fusion. The results show that the inverse covariance method has the best computational performance if the number of sensors is above 20. For fewer sensors, other methods like the group sensors method are more appropriate due to lower computational loads when inverting smaller matrices.
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION... (ijccmsjournal)
Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The best-known estimation tool in target tracking is the Kalman filter, and there are several mathematical approaches to combining the observations of multiple sensors with it. An important issue in choosing an appropriate approach is computational complexity. In this paper, four data fusion algorithms based on the Kalman filter are considered, including three centralized and one decentralized method. Using MATLAB, the computational loads of these methods are compared as the number of sensors increases. The results show that the inverse covariance method has the best computational performance if the number of sensors is above 20. For a smaller number of sensors, other methods, especially the group-sensor method, are more appropriate.
Investigation of repeated blasts at Aitik mine using waveform cross correlation (Ivan Kitov)
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden based on waveform cross correlation. Several advanced methods based on tensor Singular Value Decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
Further results on the joint time delay and frequency estimation without eige... (IJCNCJournal)
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered in several engineering applications. In this paper, a high-resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method reduces the computational complexity of JTDFE to approximately half that of RRQR methods. It generates estimates of the unknown parameters based on the observation and/or covariance matrices, leading to a significant improvement in the computational load. Computer simulations are included to demonstrate the proposed method.
Chaos Suppression and Stabilization of Generalized Liu Chaotic Control System (ijtsrd)
In this paper, the concept of generalized stabilization for nonlinear systems is introduced and the stabilization of the generalized Liu chaotic control system is explored. Based on the time-domain approach with differential inequalities, a suitable control is presented such that generalized stabilization for a class of Liu chaotic systems can be achieved. Moreover, not only can the guaranteed exponential convergence rate be arbitrarily pre-specified, but the critical time can also be correctly estimated. Finally, some numerical simulations are given to demonstrate the feasibility and effectiveness of the obtained results. Yeong-Jeu Sun | Jer-Guang Hsieh "Chaos Suppression and Stabilization of Generalized Liu Chaotic Control System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-1, December 2018, URL: http://www.ijtsrd.com/papers/ijtsrd20195.pdf
http://www.ijtsrd.com/engineering/electrical-engineering/20195/chaos-suppression-and-stabilization-of-generalized-liu-chaotic-control-system/yeong-jeu-sun
This document describes the construction and selection of single sampling quick switching variables systems for given control limits that involve minimum sum of risks. It provides the procedure for finding the single sampling quick switching variables system that has the minimum sum of producer's and consumer's risk for a specified acceptable quality level and limiting quality level. A table is constructed that can be used to select a quick switching variables sampling system for given values of AQL and LQL that has the minimum sum of risks. The document also discusses how to design a quick switching variables sampling system with an unknown standard deviation that involves minimum sum of risks.
Ill-posedness formulation of the emission source localization in the radio- d... (Ahmed Ammar Rebai PhD)
To contact the authors: tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown that the solution of the radio-transient source localization problem (from the radio-shower time of arrival on antennas) depends strongly on its formulation, and that such solutions can be purely numerical artifacts. Based on a detailed analysis of already published results from radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina, and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneracy of the set of solutions and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Several properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems where interference is a problem. The radar signal processor is used to remove unintentional clutter caused by ground reflections and echoes from sea, desert, forest, etc., as well as intentional jamming, and to make the received signal useful. In this paper a new approach to STAP based on subspace projection is described in detail. According to linear algebra and three-dimensional geometry, if we project a range space onto a subspace spanned by linearly independent vectors, we can suppress data that is perpendicular to that subspace. In the subspace-based technique, the received data is projected onto a subspace orthogonal to the clutter subspace in order to remove the clutter. The probability of target detection can be found in order to analyse the performance of the proposed algorithm. Two existing algorithms, SMI and DPCA, are chosen for comparison. When plotting the detection probability against SINR, the results obtained are better for the subspace technique than for DPCA and SMI: the SINR is improved for the subspace-based technique at the same detection probability. The effect of subspace rank on SINR was also analysed to understand the computational load caused by the technique. We also analysed the convergence of the algorithm by plotting SINR against range snapshots.
This document summarizes a research paper that proposes using a two-step sequential probability ratio test (SPRT) approach to analyze software reliability growth model (SRGM) data. Specifically, it applies the approach to the Half Logistic Software Reliability Growth Model (HLSRGM). The SPRT approach allows drawing conclusions about software reliability from sequential or continuous monitoring of failure data, potentially reaching conclusions more quickly than traditional hypothesis testing. Equations are provided for determining acceptance, rejection, and continuation regions based on comparing observed failure counts to lines derived from the HLSRGM mean value function. The approach is applied to five sets of existing software failure data to analyze results.
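The accept/reject/continue logic underlying the SPRT can be sketched in its generic Wald form. The example below tests a Bernoulli rate rather than the HLSRGM mean value function of the paper; the hypotheses, error rates, and sampler are illustrative assumptions.

```python
import math, random

rng = random.Random(3)

# Wald SPRT for a Bernoulli rate: H0: p = 0.3  vs  H1: p = 0.5.
alpha = beta = 0.05
upper = math.log((1 - beta) / alpha)    # accept H1 once the LLR crosses this
lower = math.log(beta / (1 - alpha))    # accept H0 once the LLR drops below this
p0, p1 = 0.3, 0.5

def sprt(sample_fn, max_n=10_000):
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_n:
        x = sample_fn()
        n += 1
        # Log-likelihood-ratio increment for one Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    return ("H1" if llr >= upper else "H0"), n

decision, n = sprt(lambda: rng.random() < 0.5)   # data truly from H1
print(decision, n)
```

The appeal for reliability monitoring is the middle "continuation" region: the test keeps consuming failure data only as long as the evidence is inconclusive, typically deciding after far fewer observations than a fixed-sample test.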
Constraints on the gluon PDF from top quark differential distributions at NNLO (juanrojochacon)
- The document discusses constraints on the gluon PDF from top quark production at hadron colliders.
- It describes using the inclusive top quark pair production cross section to reduce uncertainties in the gluon PDF, especially in the large-x region between 0.1 and 0.5.
- Cross section ratios between different beam energies, such as 8 TeV/7 TeV, are highlighted as powerful precision tests that can discriminate between PDFs and probe BSM physics.
Recurrence Quantification Analysis: Tutorial & application to eye-movement data (Deb Aks)
This document provides an overview of recurrence quantification analysis (RQA) and its application to analyzing eye movement data. RQA uses time series analysis techniques like phase space reconstruction to detect recurring patterns in complex systems. It was applied to study whether the recurring dynamics of eye movements can serve as a memory to sustain object tracking over time and during interruptions. The document reviews key concepts in RQA like delay coordinates, embedding dimension estimation, recurrence plots, and measures like determinism, laminarity, and trapping time. It includes examples of RQA applied to simulated sine waves and analyses the steps involved in conducting RQA on human eye tracking data.
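The core RQA pipeline, delay embedding followed by a thresholded recurrence matrix, can be sketched on a synthetic sine rather than eye-movement data; the embedding parameters and threshold below are assumed for illustration, and only the recurrence rate (the simplest RQA measure) is computed, not determinism or laminarity.

```python
import math

# Recurrence matrix for a short sine series: R[i][j] = 1 when embedded
# states i and j fall within distance eps in reconstructed phase space.
series = [math.sin(0.3 * t) for t in range(200)]
dim, delay, eps = 2, 5, 0.2      # embedding dimension, delay, threshold

# Delay-coordinate embedding
vectors = [
    [series[t + k * delay] for k in range(dim)]
    for t in range(len(series) - (dim - 1) * delay)
]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

n = len(vectors)
R = [[1 if dist(vectors[i], vectors[j]) < eps else 0 for j in range(n)]
     for i in range(n)]

# Recurrence rate: fraction of point pairs that recur
rr = sum(map(sum, R)) / (n * n)
print(rr)
```

For a periodic signal like this, the recurrent points line up along diagonals parallel to the main diagonal; the lengths of those diagonal lines are what measures such as determinism and trapping time summarize.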
Principal component analysis (PCA) is a technique used to simplify complex datasets. It works by converting a set of observations of possibly correlated variables into a set of linearly uncorrelated variables called principal components. PCA identifies patterns in data and expresses the data in such a way as to highlight their similarities and differences. The main implementations of PCA are eigenvalue decomposition and singular value decomposition. PCA is useful for data compression, reducing dimensionality for visualization and building predictive models. However, it works best for data that follows a multidimensional normal distribution.
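The eigenvalue-decomposition route mentioned above can be shown end to end in two dimensions, where the covariance matrix is 2x2 and its eigen-decomposition has a closed form. The synthetic correlated data below is an illustrative assumption.

```python
import math, random

rng = random.Random(4)

# Correlated 2-D data: y is roughly 0.5*x plus noise, so the first
# principal component should point roughly along slope 0.5.
pts = []
for _ in range(2000):
    x = rng.gauss(0, 2.0)
    pts.append((x, 0.5 * x + rng.gauss(0, 0.3)))

n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
a = sum((p[0] - mx) ** 2 for p in pts) / n           # var(x)
c = sum((p[1] - my) ** 2 for p in pts) / n           # var(y)
b = sum((p[0] - mx) * (p[1] - my) for p in pts) / n  # cov(x, y)

# Closed-form eigen-decomposition of the symmetric 2x2 covariance [[a,b],[b,c]]
root = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam1 = (a + c) / 2 + root        # largest eigenvalue: variance along PC1
lam2 = (a + c) / 2 - root
v = (b, lam1 - a)                # eigenvector for lam1 (valid since b != 0)
norm = math.hypot(*v)
v = (v[0] / norm, v[1] / norm)

explained = lam1 / (lam1 + lam2) # fraction of variance captured by PC1
print(v, explained)
```

Projecting each centered point onto `v` compresses the data to one dimension while keeping most of its variance, which is the dimensionality-reduction use described above; in higher dimensions the same computation is done with a general eigen- or singular value decomposition.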
The document presents John Alexander Vargas' research proposal for exploring the use of the utcc calculus with spatial logic as the underlying logic for modeling mobile properties of concurrent systems. The methodology involves defining a spatial constraint system, modeling a simple example using utcc and spatial constraints, verifying spatial properties, and modeling a complex multi-cellular system. Spatial logic is introduced for specifying properties of mobile systems, with logical inference rules and a sequent calculus for deciding validity presented.
Sparsity based Joint Direction-of-Arrival and Offset Frequency Estimator (Jason Fernandes)
- The document proposes a method to jointly estimate direction-of-arrival (DoA) and offset frequency of signals impinging on an antenna array using sparse representation.
- It builds on previous work by extending the estimation to include both spatial (DoA) and temporal (offset frequency) dimensions. This is done by constructing a joint dictionary as the Kronecker product of discrete spatial and temporal steering vector grids.
- Sparse recovery algorithms can then be applied to estimate the sparse coefficients and jointly infer the DoAs and offset frequencies of impinging signals from compressed measurements of the antenna array output over multiple time snapshots.
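The joint-dictionary construction in the bullets above can be sketched for a single on-grid source. This is a heavily simplified stand-in: the array geometry, grid spacings, and single-snapshot model are assumptions, and the "recovery" is just a matched search for the best atom, i.e., the one-sparse special case, rather than a full sparse recovery algorithm.

```python
import cmath, math

M, T = 6, 8                      # antennas, time snapshots
d = 0.5                          # element spacing in wavelengths

def spatial(theta):              # spatial steering vector for a ULA
    return [cmath.exp(2j * math.pi * d * m * math.sin(theta)) for m in range(M)]

def temporal(f):                 # temporal steering vector for offset freq f
    return [cmath.exp(2j * math.pi * f * t) for t in range(T)]

def kron(a, b):                  # Kronecker product of two vectors
    return [ai * bi for ai in a for bi in b]

thetas = [math.radians(deg) for deg in range(-60, 61, 5)]
freqs = [k / 100 for k in range(0, 21)]   # normalized offset frequencies

# Single source at (20 degrees, f = 0.10); the noiseless measurement is
# exactly its joint (spatial x temporal) steering vector.
x = kron(spatial(math.radians(20)), temporal(0.10))

best, best_score = None, -1.0
for th in thetas:
    for f in freqs:
        atom = kron(spatial(th), temporal(f))
        score = abs(sum(xi * ai.conjugate() for xi, ai in zip(x, atom)))
        if score > best_score:
            best, best_score = (math.degrees(th), f), score
print(best)
```

With multiple sources, noise, and compressed measurements, the matched search is replaced by an actual sparse solver (e.g., basis pursuit or OMP) over the same Kronecker dictionary, which is where the cited paper's contribution lies.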
Another Adaptive Approach to Novelty Detection in Time Series (csandit)
This paper introduces a novel approach to novelty detection in time series data. The approach uses a neural network model to predict individual samples in a time series. Novelty is detected based on both the prediction error and the changes to the neural network weights from gradient descent learning. The relationship between prediction error and weight changes is key to the approach. The method is demonstrated on both artificial and real ECG time series data, showing it can detect small perturbations in the data even when noise is present. The approach is computationally efficient and could be useful for online novelty detection applications.
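The pairing of prediction error with weight change can be illustrated with a plain linear predictor trained by gradient descent (LMS) instead of the paper's neural network; the signal, injected perturbation, filter order, and learning rate below are assumptions for the sketch.

```python
import math

# Online linear predictor trained by LMS; a novelty shows up as a joint
# jump in the prediction error and in the weight-update magnitude.
order, lr = 4, 0.05
w = [0.0] * order

series = [math.sin(0.2 * t) for t in range(400)]
series[300] += 1.5               # injected perturbation, the "novel" sample

errors, wchanges = [], []
for t in range(order, len(series)):
    window = series[t - order:t]
    pred = sum(wi * xi for wi, xi in zip(w, window))
    err = series[t] - pred
    delta = [lr * err * xi for xi in window]     # LMS gradient step
    w = [wi + di for wi, di in zip(w, delta)]
    errors.append(abs(err))
    wchanges.append(math.sqrt(sum(di * di for di in delta)))

print(max(errors))
```

Once the predictor has learned the regular dynamics, both traces stay near zero and spike together at the perturbed sample; thresholding on both, as the paper does, suppresses false alarms that a pure prediction-error detector would raise on noise.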
A NEW METHOD OF SMALL-SIGNAL CALIBRATION BASED ON KALMAN FILTER (ijcseit)
The basic principle of the Kalman filter (KF) is introduced in this paper, based on which a new method for high-precision measurement of small signals, instead of unrealistic direct measurement, is presented. We have designed a method of multi-meter information fusion. With this method, we filter the measured value of a type of special equipment and extract the optimal estimate of the true value. Experimental results show that this method can effectively eliminate the random error of the measurement process. The optimal estimate error meets the basic requirement of conformity assessment, 3U95 ≤ MPEV. This method can provide an algorithm reference for the design of automatic calibration equipment.
This document discusses non-intrusive load monitoring (NILM), which is a process that uses a single sensor installed at the main electrical panel of a home to identify individual appliance energy usage without additional per-appliance sensors. It describes the general NILM framework, which includes data acquisition, feature extraction, load identification, and system training. Challenges of NILM include identifying low power appliances and handling new appliances not in the signature database. NILM research aims to improve accuracy by combining transient and steady-state appliance signatures and using unsupervised learning methods.
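The feature-extraction step of the NILM framework described above often begins with steady-state change detection on the aggregate power signal. The sketch below is a minimal illustrative event detector, not a method from the document: the synthetic wattage trace, window size, and threshold are all assumptions.

```python
# Toy event detection on an aggregate power signal: flag an appliance
# on/off transition when the means of two adjacent windows differ by
# more than a threshold (in watts).
signal = [100.0] * 50 + [1600.0] * 80 + [100.0] * 70   # base load + heater cycle

win, thresh = 5, 200.0
events = []
for t in range(win, len(signal) - win):
    before = sum(signal[t - win:t]) / win
    after = sum(signal[t:t + win]) / win
    if abs(after - before) > thresh:
        events.append((t, after - before))

# Collapse each run of consecutive detections to its largest jump
grouped, run = [], [events[0]]
for e in events[1:]:
    if e[0] == run[-1][0] + 1:
        run.append(e)
    else:
        grouped.append(max(run, key=lambda d: abs(d[1])))
        run = [e]
grouped.append(max(run, key=lambda d: abs(d[1])))
print(grouped)
```

The signed magnitudes of the detected steps (+1500 W on, -1500 W off here) are exactly the kind of steady-state signature that load identification then matches against the appliance database; the challenges noted above, such as low-power appliances, correspond to steps that fall below the detector's threshold.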
UMTS optimization with a proposed solution, by Praveen Singh. The document appears to discuss optimization of the Universal Mobile Telecommunications System (UMTS) and to present a proposed solution.
This document provides an analysis of common problems that can cause poor network coverage in GSM networks. It discusses issues related to engineering quality, network planning and optimization, non-Huawei devices, and equipment faults. The document also provides a troubleshooting process and analyzes typical cases to help engineers identify and resolve coverage problems.
This document discusses using an observability index to decompose the Kalman filter into two filters applied sequentially: 1) A filter estimating the transitional process caused by uncertainty in initial conditions, which treats the system as deterministic. 2) A filter estimating the steady state that treats the system as stochastic. The observability index measures observability as a signal-to-noise ratio to evaluate how long it takes to estimate states in the presence of noise. This decomposition simplifies filter implementation and reduces computational requirements by restricting estimated states and dividing the observation period into transitional and steady state estimation.
This document provides a tutorial on the RESTART rare event simulation technique. RESTART introduces nested sets of states (C1 to CM) defined by importance thresholds. When the process enters a set Ci, Ri simulation retrials are performed within that set until exiting. This oversamples high importance regions to estimate rare event probabilities more efficiently than crude simulation. The document analyzes RESTART's efficiency, deriving formulas for the variance of its estimator and gain over crude simulation. Guidelines are also provided for choosing an optimal importance function to maximize efficiency. Examples applying the guidelines to queueing networks and reliable systems are presented.
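The splitting idea behind RESTART can be illustrated with a minimal fixed-splitting sketch (a simplification of full RESTART, which performs retrials on every re-entry into a set): it estimates the rare probability that a negatively drifting random walk reaches a high level before falling below its start. The thresholds, retrial count, and step probability below are illustrative choices, not values from the tutorial.

```python
import random

P_UP = 0.3                  # upward step probability (negative drift)
THRESHOLDS = [2, 4, 6, 8]   # importance thresholds C1..CM; the last defines the rare event
R = 5                       # retrials spawned at each threshold crossing

def hits_target(x, target, rng):
    """Simulate the walk from x until it reaches `target` (True) or -1 (False)."""
    while -1 < x < target:
        x += 1 if rng.random() < P_UP else -1
    return x == target

def splitting_estimate(n_trials, rng):
    """Estimate P(walk from 0 reaches 8 before -1) by multilevel splitting."""
    def stage(idx):
        # We just reached THRESHOLDS[idx-1]; spawn R retrials toward the next set.
        if idx == len(THRESHOLDS):
            return 1
        return sum(stage(idx + 1)
                   for _ in range(R)
                   if hits_target(THRESHOLDS[idx - 1], THRESHOLDS[idx], rng))
    count = sum(stage(1) for _ in range(n_trials)
                if hits_target(0, THRESHOLDS[0], rng))
    # Each success carries weight 1 / (n_trials * R^(number of split stages)).
    return count / (n_trials * R ** (len(THRESHOLDS) - 1))

est = splitting_estimate(4000, random.Random(42))
# Exact gambler's-ruin value is (r - 1) / (r**9 - 1) with r = 0.7/0.3, about 6.5e-4.
```

The estimator is unbiased because, for a ±1 walk, reaching the top level requires passing through every threshold, so the overall probability factors into per-stage probabilities that the retrials sample independently.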
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...ijccmsjournal
This document compares the computational complexity of four multi-sensor data fusion methods based on the Kalman filter using MATLAB simulations. The four methods are: group-sensor method, sequential-sensor method, inverse covariance form, and track-to-track fusion. The results show that the inverse covariance method has the best computational performance if the number of sensors is above 20. For fewer sensors, other methods like the group sensors method are more appropriate due to lower computational loads when inverting smaller matrices.
Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The most widely used estimation tool in target tracking is the Kalman filter, and there are several mathematical approaches to combining the observations of multiple sensors with it. An important issue in choosing an approach is computational complexity. In this paper, four data fusion algorithms based on the Kalman filter are considered, including three centralized methods and one decentralized method. Using MATLAB, the computational loads of these methods are compared as the number of sensors increases. The results show that the inverse covariance method has the best computational performance if the number of sensors is above 20. For a smaller number of sensors, other methods, especially the group-sensor method, are more appropriate.
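As a toy illustration of two of the fusion forms named above, the sketch below fuses two scalar sensor readings with a sequential Kalman update and with the inverse-covariance (information) form, and shows they agree; all numbers are invented for illustration.

```python
def kalman_update(mean, var, z, r):
    """Scalar Kalman measurement update: fuse prior N(mean, var) with z ~ N(x, r)."""
    k = var / (var + r)                      # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

# Sequential-sensor method: apply the measurement update once per sensor.
mean, var = 0.0, 1e6                         # near-diffuse prior
for z, r in [(1.0, 1.0), (3.0, 1.0)]:
    mean, var = kalman_update(mean, var, z, r)

# Inverse-covariance (information) form: information contributions simply add.
info = 1 / 1e6 + 1 / 1.0 + 1 / 1.0
batch_mean = (0.0 / 1e6 + 1.0 / 1.0 + 3.0 / 1.0) / info
```

In exact arithmetic the two forms give identical posteriors; the computational difference the paper measures comes from matrix sizes and inversions, which this scalar sketch deliberately avoids.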
Investigation of repeated blasts at Aitik mine using waveform cross correlationIvan Kitov
We present results of signal detection from repeated events at the Aitik and Kiruna mines in Sweden based on waveform cross correlation. Several advanced methods based on tensor Singular Value Decomposition are applied to waveforms measured at the seismic array ARCES, which consists of three-component sensors.
Further results on the joint time delay and frequency estimation without eige...IJCNCJournal
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered for several engineering applications. In this paper, a high-resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method decreases the computational complexity of JTDFE by approximately half compared with RRQR methods. It generates estimates of the unknown parameters based on the observation and/or covariance matrices, leading to a significant improvement in the computational load. Computer simulations are included in this paper to demonstrate the proposed method.
Chaos Suppression and Stabilization of Generalized Liu Chaotic Control Systemijtsrd
In this paper, the concept of generalized stabilization for nonlinear systems is introduced and the stabilization of the generalized Liu chaotic control system is explored. Based on the time-domain approach with differential inequalities, a suitable control is presented such that generalized stabilization for a class of Liu chaotic systems can be achieved. Meanwhile, not only can the guaranteed exponential convergence rate be arbitrarily pre-specified, but the critical time can also be correctly estimated. Finally, some numerical simulations are given to demonstrate the feasibility and effectiveness of the obtained results. Yeong-Jeu Sun | Jer-Guang Hsieh "Chaos Suppression and Stabilization of Generalized Liu Chaotic Control System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-1, December 2018, URL: http://www.ijtsrd.com/papers/ijtsrd20195.pdf
http://www.ijtsrd.com/engineering/electrical-engineering/20195/chaos-suppression-and-stabilization-of-generalized-liu-chaotic-control-system/yeong-jeu-sun
This document describes the construction and selection of single sampling quick switching variables systems for given control limits that involve minimum sum of risks. It provides the procedure for finding the single sampling quick switching variables system that has the minimum sum of producer's and consumer's risk for a specified acceptable quality level and limiting quality level. A table is constructed that can be used to select a quick switching variables sampling system for given values of AQL and LQL that has the minimum sum of risks. The document also discusses how to design a quick switching variables sampling system with an unknown standard deviation that involves minimum sum of risks.
Ill-posedness formulation of the emission source localization in the radio- d...Ahmed Ammar Rebai PhD
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown the strong dependence of the solution of the radio-transient source localization problem (reconstructed from radio-shower times of arrival on antennas) on the formulation, such that some solutions are purely numerical artifacts. Based on a detailed analysis of already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina, and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches are used: the degeneracy of the set of solutions and the poor conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical analysis. Several properties of the non-linear least-squares objective function are discussed, such as the configuration of the set of solutions and the bias.
Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems where interference is a problem. The radar signal processor removes the unintentional clutter caused by ground reflections and by echoes from sea, desert, forest, etc., as well as intentional jamming, to make the received signal useful. In this paper a new approach to STAP based on subspace projection is described in detail. From linear algebra and three-dimensional geometry, projecting a range space onto a subspace spanned by linearly independent vectors suppresses data that is perpendicular to that subspace. In the subspace-based technique, the received data is projected onto a subspace orthogonal to the clutter subspace to remove the clutter. The probability of target detection is derived in order to analyse the performance of the proposed algorithm. Two existing algorithms, SMI and DPCA, are chosen for comparison. When plotting detection probability against SINR, the results obtained are better for the subspace technique than for DPCA and SMI; the subspace-based technique achieves improved SINR for the same detection probability. The effect of subspace rank on SINR is also analysed to understand the computational load of the technique, and the convergence of the algorithm is examined through plots of SINR against range snapshots.
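The orthogonal-projection step described above can be sketched in a few lines of NumPy; the "clutter subspace" here is a random placeholder basis, not a physical clutter model.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 32                                       # space-time snapshot dimension (illustrative)
# Placeholder clutter subspace: 4 linearly independent basis vectors.
C = rng.normal(size=(M, 4)) + 1j * rng.normal(size=(M, 4))

U, _ = np.linalg.qr(C)                       # orthonormal basis of the clutter subspace
P = np.eye(M) - U @ U.conj().T               # projector onto the orthogonal complement

clutter = C @ (rng.normal(size=4) + 1j * rng.normal(size=4))  # lies in the clutter subspace
target = rng.normal(size=M) + 1j * rng.normal(size=M)

residual = P @ (clutter + target)            # the clutter component is suppressed
```

Anything lying in the span of the clutter basis is annihilated by P, while components orthogonal to it pass through unchanged, which is exactly the suppression mechanism the abstract describes.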
This document summarizes a research paper that proposes using a two-step sequential probability ratio test (SPRT) approach to analyze software reliability growth model (SRGM) data. Specifically, it applies the approach to the Half Logistic Software Reliability Growth Model (HLSRGM). The SPRT approach allows drawing conclusions about software reliability from sequential or continuous monitoring of failure data, potentially reaching conclusions more quickly than traditional hypothesis testing. Equations are provided for determining acceptance, rejection, and continuation regions based on comparing observed failure counts to lines derived from the HLSRGM mean value function. The approach is applied to five sets of existing software failure data to analyze results.
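The acceptance, rejection, and continuation regions mentioned above follow Wald's classic SPRT construction; a generic Bernoulli version (not the HLSRGM-specific equations, whose mean value function is not reproduced here) looks like this:

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli observations.
    Returns the decision and the number of samples consumed."""
    upper = math.log((1 - beta) / alpha)     # crossing above accepts H1
    lower = math.log(beta / (1 - alpha))     # crossing below accepts H0
    llr = 0.0
    for n, x in enumerate(observations, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)

decision = sprt([1, 1, 1, 1, 1], p0=0.2, p1=0.8)   # decides after only 3 samples
```

The appeal for reliability monitoring is visible in the example: strongly one-sided data triggers a decision well before a fixed-sample test would conclude.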
Constraints on the gluon PDF from top quark differential distributions at NNLOjuanrojochacon
- The document discusses constraints on the gluon PDF from top quark production at hadron colliders.
- It describes using the inclusive top quark pair production cross section to reduce uncertainties in the gluon PDF, especially in the large-x region between 0.1 and 0.5.
- Cross section ratios between different beam energies, such as 8 TeV/7 TeV, are highlighted as powerful precision tests that can discriminate between PDFs and probe BSM physics.
Recurrence Quantification Analysis :Tutorial & application to eye-movement dataDeb Aks
This document provides an overview of recurrence quantification analysis (RQA) and its application to analyzing eye movement data. RQA uses time series analysis techniques like phase space reconstruction to detect recurring patterns in complex systems. It was applied to study whether the recurring dynamics of eye movements can serve as a memory to sustain object tracking over time and during interruptions. The document reviews key concepts in RQA like delay coordinates, embedding dimension estimation, recurrence plots, and measures like determinism, laminarity, and trapping time. It includes examples of RQA applied to simulated sine waves and analyses the steps involved in conducting RQA on human eye tracking data.
Principal component analysis (PCA) is a technique used to simplify complex datasets. It works by converting a set of observations of possibly correlated variables into a set of linearly uncorrelated variables called principal components. PCA identifies patterns in data and expresses the data in such a way as to highlight their similarities and differences. The main implementations of PCA are eigenvalue decomposition and singular value decomposition. PCA is useful for data compression, reducing dimensionality for visualization and building predictive models. However, it works best for data that follows a multidimensional normal distribution.
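The SVD implementation mentioned above fits in a few lines; the data below is a synthetic, nearly one-dimensional cloud, and all names are illustrative.

```python
import numpy as np

def pca(X, k):
    """PCA via SVD: rows of X are observations, columns are variables.
    Returns the top-k principal axes and the variance along each."""
    Xc = X - X.mean(axis=0)                        # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (len(X) - 1)                      # variance captured by each axis
    return Vt[:k], var[:k]

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(200, 2))  # nearly 1-D data
comps, var = pca(X, 2)
# The first component captures almost all of the variance.
```

Working on the centered data matrix via SVD avoids explicitly forming the covariance matrix, which is the usual reason to prefer it over eigenvalue decomposition.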
The document presents John Alexander Vargas' research proposal for exploring the use of the utcc calculus with spatial logic as the underlying logic for modeling mobile properties of concurrent systems. The methodology involves defining a spatial constraint system, modeling a simple example using utcc and spatial constraints, verifying spatial properties, and modeling a complex multi-cellular system. Spatial logic is introduced for specifying properties of mobile systems, with logical inference rules and a sequent calculus for deciding validity presented.
Sparsity based Joint Direction-of-Arrival and Offset Frequency EstimatorJason Fernandes
- The document proposes a method to jointly estimate direction-of-arrival (DoA) and offset frequency of signals impinging on an antenna array using sparse representation.
- It builds on previous work by extending the estimation to include both spatial (DoA) and temporal (offset frequency) dimensions. This is done by constructing a joint dictionary as the Kronecker product of discrete spatial and temporal steering vector grids.
- Sparse recovery algorithms can then be applied to estimate the sparse coefficients and jointly infer the DoAs and offset frequencies of impinging signals from compressed measurements of the antenna array output over multiple time snapshots.
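The joint-dictionary construction described in the second bullet can be sketched with NumPy; the array geometry, grid spacings, and sampling rate below are assumptions for illustration only, not values from the paper.

```python
import numpy as np

M, N = 8, 16          # sensors, time snapshots (assumed for illustration)
fs = 1000.0           # sampling rate in Hz (assumed)

# Spatial steering vectors on a DoA grid (uniform linear array, half-wavelength spacing).
thetas = np.deg2rad(np.arange(-90, 91, 5))
A_space = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(thetas)))   # M x G_s

# Temporal steering vectors on an offset-frequency grid.
freqs = np.arange(-50, 51, 5.0)
A_time = np.exp(2j * np.pi * np.outer(np.arange(N), freqs) / fs)        # N x G_t

# Joint dictionary: one column per (DoA, offset-frequency) pair.
D = np.kron(A_space, A_time)    # (M*N) x (G_s*G_t)
```

Each column of the Kronecker product is the vectorized outer product of one spatial and one temporal steering vector, so a sparse coefficient found by a recovery algorithm indexes a (DoA, frequency) pair directly.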
Another Adaptive Approach to Novelty Detection in Time Series csandit
This paper introduces a novel approach to novelty detection in time series data. The approach uses a neural network model to predict individual samples in a time series. Novelty is detected based on both the prediction error and the changes to the neural network weights from gradient descent learning. The relationship between prediction error and weight changes is key to the approach. The method is demonstrated on both artificial and real ECG time series data, showing it can detect small perturbations in the data even when noise is present. The approach is computationally efficient and could be useful for online novelty detection applications.
A NEW METHOD OF SMALL-SIGNAL CALIBRATION BASED ON KALMAN FILTERijcseit
The basic principle of the Kalman filter (KF) is introduced in this paper, and based on it a new method is presented for high-precision measurement of small signals instead of the unrealistic direct one. We have designed a method of multi-meter information fusion. With this method, we filter the measured values of a type of special equipment and extract the optimal estimate of the true value. Experimental results show that this method can effectively eliminate the random error of the measurement process. The optimal estimate error meets the basic requirements of conformity assessment, 3U95 ≤ MPEV. This method can provide an algorithmic reference for the design of automatic calibration equipment.
This document discusses key concepts in telecommunications network planning and traffic engineering. It covers:
- Types of random processes used to model network usage patterns like call arrival rates and durations.
- How traffic engineering balances factors like grade of service, resources, blocking vs. delay systems based on traffic amounts.
- Key metrics like erlangs, traffic intensity, busy hour, traffic volume that are used to quantify network usage and demand.
- Concepts like grade of service, blocking probability, and how they measure network performance during busy periods.
This document discusses key performance indicators (KPIs) for evaluating 3G networks. It describes various KPIs for measuring accessibility, retainability, mobility, coverage, service integrity, availability, and traffic. Formulas for calculating several KPIs are provided. Troubleshooting methods and examples are given for accessibility, retainability, and mobility-related issues. Sample daily reports and the Gsmart optimization tool interface are also shown.
This document discusses capacity planning for GSM networks. It covers topics like trunking, traffic theory including traffic intensity, grade of service, busy hour, and request rate. It describes how to dimension traffic channels and SDCCH channels based on factors like traffic intensity and grade of service. It also discusses connectivity planning between network elements like MSC, BSC, transcoder, and BTS. It provides details on air interface, Abis interface between BSC and BTS, and different LAPD modes for signaling concentration over Abis. The objective is to estimate the optimal number of resources needed to meet performance requirements based on traffic analysis and engineering principles.
This document discusses radio frequency (RF) planning for cellular networks. It addresses the key aspects of RF planning including:
1) Providing adequate coverage and capacity while using spectrum efficiently and minimizing the number of cell sites.
2) Conducting a planning process that involves inputs from customers, coverage and capacity planning, parameter planning, and optimization.
3) Setting objectives for coverage, capacity, network growth, and cost-effective design.
This document provides an overview of dimensioning a GSM/GPRS network. It defines key terms like Erlang units which measure traffic intensity, and describes traffic models which define parameters like call arrival rates and durations. The dimensioning procedure calculates the number of radio resources, TRXs, and equipment needed based on these models and blocking probability targets. Dimensioning ensures the network has sufficient capacity for the predicted traffic load.
This document discusses the process of optimizing a 3G radio network. It covers the various phases of network optimization including single site verification, RF optimization, service testing and parameter optimization, and regular reference route testing. It then provides details on RF optimization including preparation, targets, solutions, and the analysis of drive test data to identify issues and determine required changes. Examples are also given of antenna adjustment, drop call analysis, and neighbor list verification.
Traffic analysis and characterization is used to analyze communication networks and model their performance. It applies probability theory to understand the random nature of network traffic, which arises from random call arrivals and holding times. Key aspects of traffic that are analyzed include arrival rates, holding times, destination distributions, user behavior, occupancy levels, busy hour patterns, and call completion rates. Units of traffic like Erlangs and Cent Call Seconds are used to measure traffic intensity. Models like negative exponential and Poisson distributions help characterize random inter-arrival times.
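As one concrete example of the traffic calculations described above, the Erlang B blocking probability for an offered load of a erlangs on n trunks (calls cleared) is usually computed with the standard recursion B(n) = a·B(n−1) / (n + a·B(n−1)), B(0) = 1:

```python
def erlang_b(traffic, channels):
    """Erlang B blocking probability via the standard numerically stable recursion.
    traffic: offered load in erlangs; channels: number of trunks."""
    b = 1.0                                  # B(0) = 1
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)  # B(n) from B(n-1)
    return b

b = erlang_b(2.0, 2)   # 2 erlangs offered to 2 trunks -> blocking 0.4
```

The recursion avoids the large factorials of the closed-form expression, which is why it is the form normally used in dimensioning tools.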
This document provides an overview of frequency planning in cellular networks. It discusses key concepts like frequency reuse, co-channel interference, system capacity, and design criteria. An optimal frequency plan requires minimizing interference between co-channel and adjacent channel cells. Frequency planning involves dividing the available spectrum into channels and allocating different sets of channels to nearby base stations to reduce interference. The document also provides examples of calculating cluster size and frequency allocation patterns.
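The cluster-size calculation mentioned above uses the standard hexagonal-geometry relations N = i² + ij + j² and co-channel reuse ratio D/R = √(3N); a tiny sketch:

```python
import math

def cluster_size(i, j):
    """Hexagonal-geometry cluster size N = i^2 + i*j + j^2 for shift parameters i, j."""
    return i * i + i * j + j * j

def reuse_ratio(n):
    """Co-channel reuse ratio D/R = sqrt(3N) for cluster size N."""
    return math.sqrt(3 * n)

n = cluster_size(2, 1)   # the common 7-cell reuse pattern
```

Only certain integers (1, 3, 4, 7, 12, ...) are valid cluster sizes, which is why frequency plans are built around these particular reuse patterns.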
This document discusses cellular communication systems and the cellular concept. It introduces cellular networks as using multiple low-power transmitters and frequency reuse to improve spectrum efficiency and user capacity compared to single high-power transmitters. Key aspects covered include hexagonal cell shapes, frequency reuse patterns, cluster size calculations, co-channel interference management through channel assignment strategies, and an overview of the base station subsystem, network switching subsystem and their components.
The document introduces radio network optimization by discussing:
1) Typical problems like coverage issues, interference, unbalanced power budgets, and congestion.
2) The 4 stages of a mobile originated call establishment and location update process, as well as the handover process between cells.
3) Global indicators like call setup times help evaluate a radio network's performance and identify optimization opportunities.
DC-HSPA+ is a mobile broadband technology that aggregates two 5MHz carriers to provide download speeds of up to 42Mbps. It doubles the speed capabilities of HSPA+ by using dual carriers for the downlink, with one carrier acting as an anchor and the other as supplementary. This allows users to be served using two carriers simultaneously instead of one as in HSPA+, effectively doubling throughput. DC-HSPA+ also provides benefits like carrier load balancing and backward compatibility with earlier 3G releases. Real-world tests showed speeds were doubled compared to HSPA+ alone.
Understanding RF Fundamentals and the Radio Design of Wireless NetworksCisco Mobility
The document discusses an advanced session that focuses on understanding radio frequency fundamentals and design of wireless networks, covering topics like 802.11 radio hardware, antenna basics, interpreting antenna patterns, distributed antenna systems, survey tools, and lessons learned from challenging wireless deployments in various environments. The session aims to provide a deep-dive understanding of the radio frequency aspects of wireless LAN design and deployment that are often overlooked. Certain topics related to security, density, location services, and management will not be covered in this session.
COMPARISON OF WAVELET NETWORK AND LOGISTIC REGRESSION IN PREDICTING ENTERPRIS...ijcsit
Enterprise financial distress or failure includes bankruptcy prediction, financial distress, corporate performance prediction, and credit risk estimation. The aim of this paper is to use wavelet networks in non-linear combination prediction to address a problem of the ARMA (Auto-Regressive and Moving Average) model: the ARMA model needs to estimate the values of all parameters in the model, which requires a large amount of computation. With this aim, the paper provides an extensive review of wavelet networks and logistic regression. It discusses the wavelet neural network structure, the wavelet network model training algorithm, and accuracy and error rates (accuracy of classification, Type I error, and Type II error). The main research contribution is a proposed business failure prediction model (a wavelet network model and a logistic regression model). The empirical research compares the wavelet network and logistic regression on training and forecasting samples; the results show that the wavelet network model is highly accurate and that, in overall prediction accuracy, Type I error, and Type II error, the wavelet network model is better than the logistic regression model.
Cellular wireless systems like GSM suffer from congestion, resulting in overall system degradation and poor service delivery. When the traffic demand in a geographical area is high, the input traffic rate will exceed the capacity of the output lines. This work focused on a homogeneous wireless network (one whose network traffic and resource dimensioning are statistically identical) so that network performance evaluation can be reduced to a system with a single cell and a single traffic type. Such a system can employ a queuing model to evaluate the performance metric of a cell in terms of blocking probability. Five congestion control models were compared in the work to ascertain their peculiarities: Erlang B, Erlang C, Engset (cleared), Engset (buffered), and Bernoulli. To analyze the system, an aggregate one-dimensional Markov chain was derived that describes a call arrival process under the assumption that it is Poisson distributed. The models were simulated and their results show varying performances; however, the Bernoulli model (Pb5) tends to show a situation that allows more users access to the system, with the congestion level remaining unaffected despite increases in the number of users and the offered traffic into the system.
This document proposes a stochastic modeling approach to analyze the time-domain variability of general linear systems with uncertain parameters. It uses a polynomial chaos expansion of the scattering parameters to build an "augmented system" described by a deterministic matrix. The Galerkin projection method is used to relate the polynomial chaos coefficients of the input/output port signals. A Vector Fitting algorithm then generates a stable and passive state-space model of the augmented system. This allows time-domain variability analysis to be performed with one simulation, demonstrating computational efficiency over conventional Monte Carlo methods. The approach is validated on a microstrip bandstop filter with random width and permittivity parameters.
TEST GENERATION FOR ANALOG AND MIXED-SIGNAL CIRCUITS USING HYBRID SYSTEM MODELSVLSICS Design
In this paper we propose an approach for testing time-domain properties of analog and mixed-signal circuits. The approach is based on an adaptation of a recently developed test generation technique for hybrid systems and a new concept of coverage for such systems. The approach is illustrated by its application to some benchmark circuits.
Foundation and Synchronization of the Dynamic Output Dual Systemsijtsrd
In this paper, the synchronization problem of the dynamic output dual systems is firstly introduced and investigated. Based on the time domain approach, the state variables synchronization of such dual systems can be verified. Meanwhile, the guaranteed exponential convergence rate can be accurately estimated. Finally, some numerical simulations are provided to illustrate the feasibility and effectiveness of the obtained result. Yeong-Jeu Sun "Foundation and Synchronization of the Dynamic Output Dual Systems" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29256.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29256/foundation-and-synchronization-of-the-dynamic-output-dual-systems/yeong-jeu-sun
Analysis of intelligent system design by neuro adaptive control no restrictioniaemedu
This document discusses using neuro-adaptive control to analyze the design of intelligent systems. It begins by introducing the topic and noting that conventional adaptive control techniques assume explicit system models or dynamic structures based on linear models, which may not be valid for complex nonlinear systems. Neural networks and other intelligent control approaches that do not require explicit mathematical modeling are presented as alternatives. The paper then focuses on using time-delay neural networks for system identification and control of nonlinear dynamic systems. Various neural network architectures and learning algorithms for system modeling and control are described.
Analysis of intelligent system design by neuro adaptive controliaemedu
This document summarizes the analysis of intelligent system design using neuro-adaptive control methods. It discusses using neural networks for system identification through series-parallel and parallel models. It also discusses supervised control using a neural network trained by an expert operator, inverse control using a neural network trained on the inverse system model, and neuro-adaptive control using two neural networks - one for system identification and one for control. Neuro-adaptive control allows handling nonlinear system behavior without linear approximations.
Erca energy efficient routing and reclusteringaciijournal
The pervasive application of wireless sensor networks (WSNs) is challenged by the scarce energy of sensor nodes. En-route filtering schemes, especially commutative cipher based en-route filtering (CCEF), can save energy with better filtering capacity. However, this approach suffers from fixed paths and inefficient underlying routing designed for ad-hoc networks. Moreover, as the number of remaining sensor nodes decreases, the probability of network partition increases. In this paper, we propose an energy-efficient routing and re-clustering algorithm (ERCA) to address these limitations. In the proposed scheme, when the number of sensor nodes falls to a certain threshold, the cluster size and transmission range are adjusted dynamically to maintain cluster node density. Performance results show that our approach demonstrates better filtering power and energy efficiency, and an average gain of over 285% in network lifetime.
Multivariate dimensionality reduction in cross-correlation analysis ivanokitov
- Dimensionality reduction techniques like PCA can be used to optimize master event templates for cross-correlation based seismic event detection and location.
- The document explores using various dimensionality reduction methods such as PCA, IPCA, and SSD on both real and synthetic seismic data to minimize the number of templates needed.
- Representing seismic data as hypercomplex numbers or tensors can allow dimensionality reduction techniques to utilize the full multidimensional information from seismic arrays for improved master event design.
Neural Networks for High Performance Time-Delay Estimation and Acoustic Sourc...cscpconf
Time-delay estimation is an essential building block of many signal processing applications. This paper follows up on earlier work on acoustic source localization and time-delay estimation using pattern recognition techniques in adverse environments such as reverberant rooms or underwater; it presents high-performance results obtained with supervised training of neural networks which challenge the state of the art, and compares their performance to that of well-known methods such as the Generalized Cross-Correlation or Adaptive Eigenvalue Decomposition.
NEURAL NETWORKS FOR HIGH PERFORMANCE TIME-DELAY ESTIMATION AND ACOUSTIC SOURC...csandit
Time-delay estimation is an essential building block of many signal processing applications.This paper follows up on earlier work for acoustic source localization and time delay estimation
using pattern recognition techniques in the adverse environment such as reverberant rooms or underwater; it presents unprecedented high performance results obtained with supervised training of neural networks which challenge the state of the art and compares its performance to that of well-known methods such as the Generalized Cross-Correlation or Adaptive Eigenvalue Decomposition.
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...ijccmsjournal
Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The most famous estimation tool in target tracking is Kalman filter. There are several mathematical approaches to combine the observations of multiple sensors by use of Kalman filter. An important issue in applying a proper approach is computational complexity. In this paper, four data fusion algorithms based on Kalman filter are considered including three centralized and one decentralized methods. Using MATLAB, computational loads of these methods are compared while number of sensors increases. The results show that inverse covariance method has the best computational performance if the number of sensors is above 20. For a smaller number of sensors, other methods, especially group sensors, are more appropriate..
Activity Recognition From IR Images Using Fuzzy Clustering TechniquesIJTET Journal
Infrared sensors ensures that activity recognition is possible in the day and night times. It is used especially for activity monitoring of older adults as falls are more prevalent at night than the day. This paper focus on an application of fuzzy set techniques and it is capable of accurately detecting several different activity states related to fall detection and fall risk assessment and it also includes sitting, standing and being on the floor to ensure that elderly residents gets the help they need quickly in case of emergencies. Fall detection and fall risk assessment is used for an aging in place facility for the elderly people. It describes the silhouette extraction process, the image features , and the fuzzy clustering technique.
This document describes specification tests that can be used after estimating dynamic panel data models using the generalized method of moments (GMM) estimator. It presents GMM estimators for first-order autoregressive models with individual fixed effects that exploit moment restrictions from assuming serially uncorrelated errors. Monte Carlo simulations are used to evaluate the small-sample performance of tests of serial correlation based on GMM residuals, Sargan tests, and Hausman tests. The tests are also applied to estimated employment equations using an unbalanced panel of UK firms.
1) The document presents a mathematical model for finding the maximum value of growth hormone deficiency in patients with fibromyalgia using Gaussian process.
2) It summarizes evidence that growth hormone deficiency, as defined by low insulin-like growth factor levels, occurs in approximately 30% of fibromyalgia patients and is likely responsible for some morbidity.
3) The model fits growth hormone deficiency data over time using a Gaussian process, allowing determination of the maximum deficiency value, which can help medical professionals in treatment.
A Novel Design Architecture of Secure Communication System with Reduced-Order...ijtsrd
In this paper, a new concept about secure communication system is introduced and a novel secure communication design with reduced-order linear receiver is developed to guarantee the global exponential stability of the resulting error signals. Besides, the guaranteed exponential convergence rate of the proposed secure communication system can be correctly calculated. Finally, some numerical simulations are given to demonstrate the feasibility and effectiveness of the obtained results. Yeong-Jeu Sun "A Novel Design Architecture of Secure Communication System with Reduced-Order Linear Receiver" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-1 , December 2018, URL: http://www.ijtsrd.com/papers/ijtsrd20212.pdf
http://www.ijtsrd.com/engineering/electrical-engineering/20212/a-novel-design-architecture-of-secure-communication-system-with-reduced-order-linear-receiver/yeong-jeu-sun
An Algorithm For Vector Quantizer DesignAngie Miller
The document presents an algorithm for designing vector quantizers. The algorithm is efficient, intuitive, and can be used for quantizers with general distortion measures and large block lengths. It is based on Lloyd's approach but does not require differentiation, making it applicable even when the data distribution has discrete components. The algorithm finds quantizers that meet necessary optimality conditions. Examples show it converges well and finds near-optimal quantizers for memoryless Gaussian sources. It is also used successfully to quantize LPC speech parameters with a complicated distortion measure.
Similar to SEQUENTIAL CLUSTERING-BASED EVENT DETECTION FOR NONINTRUSIVE LOAD MONITORING (20)
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
78 Computer Science & Information Technology (CS & IT)
and accurate segmentation of such change-intervals is of particular importance for event-based
NILM systems.
Basseville and Nikiforov [6] described various detection algorithms from which two approaches
have been utilized in event-based NILM systems, namely the Generalized Likelihood Ratio
(GLR) test [7, 8] and the CUmulative SUM (CUSUM) filtering [9]. Jin et al. [10] proposed a
more robust change-point detection approach based on a Goodness-of-Fit (GoF) test. In addition,
various machine learning tools such as kernel clustering [11], Hidden Markov Models (HMM)
[12], and Support Vector Machines (SVMs) [13], have been proposed as solutions to address the
change point detection problem.
Even though many previous works on NILM proposed utilizing features extracted from the
transient intervals, only a few event detection approaches consider accurate segmentation of the
transient periods for the extraction of more stable transient features [9, 14]. Moreover, many
approaches need a probabilistic model for the sample distribution in the stationary segments,
which is often difficult to obtain from the aggregate consumption profile of several,
simultaneously operating appliances. As a result, current event detection algorithms are not
robust and sometimes fail to provide reliable event-based features for appliance recognition in
practice. In this paper, we propose a novel clustering-based event detection algorithm for
event-based NILM systems. In contrast to other event detection algorithms, the proposed approach
features accurate segmentation of the input signal into stationary (steady) and non-stationary
(transient) segments. Such accurate segmentation is crucial for the extraction of more stable and
repeatable features from both transient and steady-state intervals. Moreover, the utilized
density-based clustering scheme does not impose any probabilistic model on the sample
distribution in either of the stationary segments and supports arbitrarily shaped, weakly
stationary segments, leading to enhanced robustness to noise. In addition, the proposed algorithm
features sequential (instead of batch) clustering, which is more efficient for real-time NILM
systems.
The presented approach is modular in the sense that it can combine any clustering-based event
detection algorithm with any event model. For this purpose, we also introduce different event
models at different complexity and robustness levels. This paper is organized as follows. In
section 2, we introduce different event models in the context of spatial and time-series clustering.
In section 3, we describe the proposed sequential event detection algorithm, in which the
Density-Based Spatial Clustering of Applications with Noise [15] is assumed and utilized
sequentially in spatial and temporal analysis of the input power signals. Section 4 shows the
results of applying the proposed algorithm to the publicly available, residential BLUED [16]
dataset. Finally, section 5 concludes this paper.
2. EVENT MODELS
Event models will be introduced in the order of their increasing coverage of real events,
robustness, and complexity.
Let the matrix

$\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N], \quad \mathbf{x}_n \in \mathbb{R}^l$  (1)

contain a time series of $N$ consecutive $l$-dimensional data samples (feature vectors). Typically,
$\mathbf{x}_n$ contains the measured real power $P$ and reactive power $Q$ at time instance $n$. Assume that all $N$
samples have been clustered into $m$ non-empty, disjoint clusters (sets) $C_1, C_2, \dots, C_m$. In addition,
we assume that a noise-aware clustering algorithm assigns un-clustered samples (i.e. outliers or
noisy samples) to the set $C_0$. Clearly, $\sum_{i=0}^{m} |C_i| = N$, where $|C_i|$ is the cardinality of the
cluster $C_i$. Let

$y_n = \omega(\mathbf{x}_n) \in \{0, 1, 2, \dots, m\}$  (2)

be the corresponding cluster index of $\mathbf{x}_n$ (i.e. $\mathbf{x}_n \in C_{y_n}$). We then introduce the following
definitions for two metrics of a cluster and three different event models:
Definition 1: The temporal length $\mathrm{Len}(C)$ of cluster $C$ is defined as the minimum window size
that contains all its elements. If

$\exists u: \mathbf{x}_u \in C \text{ and } \mathbf{x}_n \notin C \ \forall n < u$  (3)
$\exists v: \mathbf{x}_v \in C \text{ and } \mathbf{x}_n \notin C \ \forall n > v$  (4)

then $\mathrm{Len}(C)$ is defined as

$\mathrm{Len}(C) = v - u + 1 \geq |C|$  (5)

Here $u$ and $v$ denote the time instances of the first and last samples belonging to $C$, respectively.
Definition 2: The temporal locality ratio $\mathrm{Loc}(C)$ of cluster $C$ is defined as

$\mathrm{Loc}(C) = \frac{|C|}{\mathrm{Len}(C)} \in \left] 0, 1 \right]$  (6)

The temporal locality ratio is a measure of how much a cluster spreads over the time domain. A value
of one ($\mathrm{Loc}(C) = 1$) refers to the maximum temporal locality, where the cluster is represented by
a single segment of consecutive observations. This measure is utilized later in the event models
as a means to control the amount of noisy samples permitted in the stationary segments.
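Assuming a cluster is represented simply by the set of time indices $n$ of its member samples, Definitions 1 and 2 amount to the following minimal sketch (function names are ours, not the paper's):

```python
# Temporal length and locality ratio of a cluster (Definitions 1 and 2).
# `indices` holds the time instances n of the samples assigned to a cluster C.

def temporal_length(indices):
    """Len(C): the minimum window size containing all elements of C."""
    u, v = min(indices), max(indices)  # first and last samples of C
    return v - u + 1

def temporal_locality(indices):
    """Loc(C) = |C| / Len(C), always in the half-open interval ]0, 1]."""
    return len(indices) / temporal_length(indices)

# A cluster of consecutive samples has maximum temporal locality:
assert temporal_locality([4, 5, 6, 7]) == 1.0
# A cluster spread over the time domain has low locality: 3 samples in a
# window of length 10 give Loc(C) = 0.3.
assert temporal_locality([0, 1, 9]) == 0.3
```

Note that `temporal_length` only needs the first and last member indices, mirroring Equation (5).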
Event model $\mathcal{M}_1$: In this event model, a sequence of samples $\mathbf{X}$ is defined as an event if

(a) it does not contain any noisy samples (i.e. $C_0 = \emptyset$),
(b) it contains exactly two clusters $C_1$ and $C_2$ (i.e. $m = 2$),
(c) both clusters do not interleave (overlap) in the time domain¹
(i.e. $\exists u: \mathbf{x}_n \in C_1 \ \forall n \leq u$ and $\mathbf{x}_n \in C_2 \ \forall n > u$).

This is the simplest event model, without any outliers. It consists of two stationary segments
$\mathbf{X}_{s1} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_u]$ and $\mathbf{X}_{s2} = [\mathbf{x}_{u+1}, \mathbf{x}_{u+2}, \dots, \mathbf{x}_N]$. The segment $\mathbf{X}_t = [\mathbf{x}_u, \mathbf{x}_{u+1}]$ (including
the last sample of $\mathbf{X}_{s1}$ and the first one of $\mathbf{X}_{s2}$) is called the change-interval of the event, and $u$ is
the change point. In other words, an event under $\mathcal{M}_1$ is a change-interval of length two surrounded by
two noise-free, weakly stationary segments. This model is valid for switch-off events of most
loads as well as switch-on events of resistive ones in a noise-free power signal.
¹ For simplicity, and without loss of generality, we assume that the first and second stationary segments of an event are assigned to the cluster sets $C_1$ and
$C_2$, respectively.
Figure 1(a) shows an example of a signal segment matching the first event model $\mathcal{M}_1$, where the
scalar samples $x_n \in \mathbb{R}$ and their corresponding cluster indices $y_n = \omega(x_n) \in \{1, 2\}$ are plotted
over time. The signal represents a step-like event that consists of two stationary segments (red,
solid) and a change-interval (blue, dashed).
Event model $\mathcal{M}_2$: A sequence of samples $\mathbf{X}$ is defined as an event if

(a) it contains two clusters $C_1$ and $C_2$ (i.e. $m = 2$) and the outliers set $C_0$ is not necessarily
empty, allowing noisy samples,
(b) both clusters $C_1$ and $C_2$ show a high temporal locality ratio, i.e.
$\mathrm{Loc}(C_i) \geq 1 - \epsilon$, for $i = 1, 2$,
(c) both clusters do not interleave in the time domain, i.e.
$\exists u, v > u: \mathbf{x}_n \in C_0 \cup C_1 \ \forall n < u$ and $\mathbf{x}_u \in C_1$, and
$\mathbf{x}_n \in C_0 \cup C_2 \ \forall n > v$ and $\mathbf{x}_v \in C_2$.

Compared with $\mathcal{M}_1$, this event model permits noisy samples (i.e. outliers) as well as a lengthy
transient interval. This, however, requires the utilization of a noise-aware clustering algorithm.
By definition, $\mathbf{x}_n \in C_0 \ \forall u < n < v$. In this case, the event contains two stationary segments
$\mathbf{X}_{s1}$ and $\mathbf{X}_{s2}$, consisting of samples belonging to $C_1$ and $C_2$, respectively, and a change-interval
$\mathbf{X}_t = [\mathbf{x}_u, \mathbf{x}_{u+1}, \dots, \mathbf{x}_{v-1}, \mathbf{x}_v]$.
Figure 1: 1-dimensional signals highlighting differences between the three event models. (a) shows a step-like
event that is free of both outliers and a transient interval. In (b), random outliers as well as a transient interval
are permitted. (c) shows a repeated pattern of spikes that eventually cluster in $C_3$. Finally, (d) shows high
fluctuations in stationary segments, leading to the third cluster $C_3$ as well. The third event model $\mathcal{M}_3$ fits all
segments, the second event model $\mathcal{M}_2$ fits only (a) and (b), whereas the first model $\mathcal{M}_1$ fits only (a).
Figure 1(b) shows an example of a signal segment matching the second event model $\mathcal{M}_2$ (but
not the first one, $\mathcal{M}_1$), where the event contains a slower transient interval in a noisy signal. Even
though $\mathcal{M}_2$ is valid for most of the switch-on/off and state-change events within noisy signals, it
actually carries one implicit assumption about the noise. The assumption that $m = 2$ (maximally two
clusters representing two stationary segments) implies that the noise is random and does not
contain a repeated pattern that eventually builds up a cluster when projected onto the PQ-plane. This
is not always the case, as shown in the third example in Figure 1(c).
In the aggregate power signal, some appliances trigger a repeated, sometimes periodic, pattern of
high fluctuations or spikes. Such repeated patterns hinder the detection of other actual events.
This masking behaviour is resolved in the third event model.
Event model $\mathcal{M}_3$: A sequence of samples $\mathbf{X}$ is defined as an event if

(a) it contains at least two clusters $C_1$ and $C_2$ (i.e. $m \geq 2$) and the outliers set $C_0$ is not
necessarily empty,
(b) clusters $C_1$ and $C_2$ show a high temporal locality ratio, i.e.
$\mathrm{Loc}(C_i) \geq 1 - \epsilon$, for $i = 1, 2$,
(c) clusters $C_1$ and $C_2$ do not interleave in the time domain, i.e.
$\exists u, v > u: \mathbf{x}_n \notin C_1 \ \forall n > u$ and $\mathbf{x}_u \in C_1$, and
$\mathbf{x}_n \notin C_2 \ \forall n < v$ and $\mathbf{x}_v \in C_2$.
In this model, the limitation on the clustering cardinality is relaxed, and therefore a repeated
noise pattern that eventually results in a temporally wide cluster would not mask events
occurring in the same interval. Similar to $\mathcal{M}_2$, the sequence in this model contains two stationary
segments $\mathbf{X}_{s1}$ and $\mathbf{X}_{s2}$, consisting of samples belonging to $C_1$ and $C_2$, respectively, and a change-interval
$\mathbf{X}_t = [\mathbf{x}_u, \mathbf{x}_{u+1}, \dots, \mathbf{x}_{v-1}, \mathbf{x}_v]$.
Figure 1(c) and (d) show two event segments fit only by $\mathcal{M}_3$. Figure 1(a) shows the simplest
event, which is fit by all defined models. In Figure 1(b), the transient period as well as the noisy
spikes can only be fit by $\mathcal{M}_2$ and $\mathcal{M}_3$. Finally, the repeated noise pattern in Figure 1(c) and the high
fluctuations in Figure 1(d) only match the last event model $\mathcal{M}_3$.
3. DETECTION ALGORITHM
The main task of the event detection algorithm is to search for signal segments that match a given
event model $\mathcal{M}_i$. This is achieved by applying a clustering algorithm to different segments and
checking how well each segment matches the model. In all three models introduced in
section 2, the clustering cardinality $m$ is not known in advance. Therefore, the utilized clustering
algorithm should either be nonparametric or be preceded by a model-order estimation step.
In our approach, we utilize the commonly used Density-Based Spatial Clustering of Applications
with Noise (DBSCAN) algorithm [15]. The DBSCAN algorithm (or density-based clustering in
general) has several advantages that make it the best candidate for non-parametric sequential
event detection. First, DBSCAN assumes no prior knowledge of the number of clusters. Second,
DBSCAN supports arbitrarily shaped clusters with no constraints on their samples' distribution.
In addition, DBSCAN is a noise-aware clustering algorithm and can therefore be utilized with
any of the previously defined event models.
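As a minimal illustration of this noise-aware clustering step, the sketch below uses scikit-learn's DBSCAN implementation on synthetic $(P, Q)$ samples of a step-like event; the `eps` and `min_samples` values are placeholders for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic (P, Q) samples: two stationary segments and one spike/outlier.
rng = np.random.default_rng(0)
seg1 = rng.normal([100.0, 20.0], 1.0, size=(50, 2))   # first stationary segment
seg2 = rng.normal([300.0, 60.0], 1.0, size=(50, 2))   # second stationary segment
spike = np.array([[800.0, 150.0]])                    # an isolated outlier
X = np.vstack([seg1, spike, seg2])

# DBSCAN labels clusters 0, 1, ... and marks noise as -1; shifting by one
# maps its output onto the paper's indexing, where C_0 is the outlier set.
labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(X)
y = labels + 1            # y_n in {0, 1, ..., m}; 0 marks outliers/noise
m = int(y.max())          # number of clusters found -- no prior m needed
```

Note how the number of clusters `m` falls out of the clustering itself, which is exactly why no model-order estimation step is needed.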
Ideally, the detection algorithm searches the input signal sequentially for segments that match a
given event model. In practice, we control the matching process with a proximity measure that
quantifies how well a segment matches the given model.
Definition 3: The model loss between an event model $\mathcal{M}_i$ and a signal segment $\mathbf{X}$ is defined as

$\mathcal{L}(\mathcal{M}_i, \mathbf{X}, u, v) = |\{\mathbf{x}_n : n \leq u \text{ and } \mathbf{x}_n \in C_2\}| + |\{\mathbf{x}_n : n \geq v \text{ and } \mathbf{x}_n \in C_1\}| + |\{\mathbf{x}_n : u < n < v \text{ and } \mathbf{x}_n \in C_1 \cup C_2\}|$  (7)

where $u$ and $v$ are the indices of the first and last sample of the change-interval, respectively. In
the case of $\mathcal{M}_1$, where $v = u + 1$, the last term in Equation (7) becomes zero regardless of $u$.

The model loss function counts the number of samples that need to be corrected (i.e. reassigned
to a different set $C_i$ of the clustering structure) in order for the segment $\mathbf{X}$ to match the event
model $\mathcal{M}_i$. The lower the loss, the better the signal segment matches the event model.
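A direct transcription of Equation (7), assuming the cluster-index sequence $y_n$ of Equation (2) with 0 marking outliers (the function name is ours):

```python
# Model loss of Equation (7): the three set cardinalities, computed over the
# cluster-index sequence y (0 = outlier) for a candidate change-interval (u, v).

def model_loss(y, u, v):
    """Count samples that must be reassigned for the segment to match the model."""
    before = sum(1 for n, c in enumerate(y) if n <= u and c == 2)        # C_2 too early
    after  = sum(1 for n, c in enumerate(y) if n >= v and c == 1)        # C_1 too late
    inside = sum(1 for n, c in enumerate(y) if u < n < v and c in (1, 2))  # clustered inside
    return before + after + inside

# A perfectly segmented event: C_1 up to u, outliers inside (u, v), C_2 from v.
assert model_loss([1, 1, 1, 0, 0, 2, 2, 2], u=2, v=5) == 0
# One C_1 sample inside the change-interval costs one correction:
assert model_loss([1, 1, 1, 1, 0, 2, 2, 2], u=2, v=5) == 1
```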
The proposed detection algorithm can then be divided into two sub-tasks: the forward
detection step, which is the main process for finding an event, and the backward reduction step,
which is responsible for more accurate segmentation.
In the forward detection step, new samples are received one at a time and inserted into the
clustering space. Upon insertion of a new sample, the clustering indices are updated and the
model loss is re-estimated. Once a match is encountered (i.e. the model loss is zero or less than a
predefined threshold $\lambda$), a detection is declared with the current change point $u$ of the matched
segment and the change-interval $\mathbf{X}_t = [\mathbf{x}_u, \mathbf{x}_{u+1}, \dots, \mathbf{x}_{v-1}, \mathbf{x}_v]$, where $\mathbf{x}_v$ is the first sample of the
second stationary segment.

Once an event is declared, the backward reduction step begins. In this step, samples are removed
from the clustering space in a First-In-First-Out (FIFO) fashion while updating the clustering
structure upon each deletion and re-estimating the model loss. The reduction ends with the last
sample that satisfies the matching condition (i.e. if that sample were deleted, the segment would no
longer match the event model within the predefined loss threshold $\lambda$). The complete detection
algorithm can be described as follows. Given an event model $\mathcal{M}_i$:
1. Receive new sample $\mathbf{x}_{N+1}$ and append it to $\mathbf{X}$.
2. Update the clustering vector $\mathbf{y}$ and the clustering structure $\{C_i\}_{i=0}^{m}$.
3. Check $\mathcal{L}(\mathcal{M}_i, \mathbf{X}, u, v) \leq \lambda$ for all $u, v$; if not satisfied, go to step (1).
4. Declare event detection with change-interval $\mathbf{X}_t = [\mathbf{x}_u, \mathbf{x}_{u+1}, \dots, \mathbf{x}_{v-1}, \mathbf{x}_v]$ and change-point
$u$, where $u$ and $v$ result in the minimum model loss between $\mathcal{M}_i$ and the current segment $\mathbf{X}$
(i.e. $\arg\min_{u,v} \mathcal{L}(\mathcal{M}_i, \mathbf{X}, u, v)$).
5. Delete the oldest sample $\mathbf{x}_1$ from the segment.
6. Update the clustering vector $\mathbf{y}$ and the clustering structure $\{C_i\}_{i=0}^{m}$.
7. Check $\mathcal{L}(\mathcal{M}_i, \mathbf{X}, u, v) \leq \lambda$ for all $u, v$; if satisfied, go to step (5).
8. Re-insert the last deleted sample and declare the current segment $\mathbf{X}$ as a balanced event.
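The eight steps above can be sketched end-to-end as follows. The threshold "clusterer" and the loss function below are toy stand-ins (not the paper's DBSCAN configuration or code), chosen only so the sketch runs; `best_match` realizes the "for all u, v" search of steps (3) and (7):

```python
# Hypothetical sketch of the forward-detection / backward-reduction loop.

def best_match(y, loss, lam):
    """Minimum-loss change-interval bounds (u, v), or None if none is within lam."""
    cands = [(loss(y, u, v), u, v)
             for u in range(len(y)) for v in range(u + 1, len(y))]
    good = [c for c in cands if c[0] <= lam]
    return min(good)[1:] if good else None

def detect_events(stream, cluster, loss, lam=0):
    X = []
    for x in stream:
        X.append(x)                                    # step 1: append new sample
        if best_match(cluster(X), loss, lam) is None:  # steps 2-3: no match yet
            continue
        # Steps 5-7: backward reduction -- drop oldest samples (FIFO) while
        # the remaining segment still matches the event model.
        while len(X) > 2 and best_match(cluster(X[1:]), loss, lam) is not None:
            X = X[1:]
        u, v = best_match(cluster(X), loss, lam)       # step 4: minimum-loss (u, v)
        yield list(X[u:v + 1]), u                      # step 8: balanced event
        X = X[v:]                                      # restart from x_v

# Toy components: a threshold "clusterer" and the Equation (7) loss.
cluster = lambda X: [1 if x < 50 else 2 for x in X]
def loss(y, u, v):
    return (sum(n <= u and c == 2 for n, c in enumerate(y))
            + sum(n >= v and c == 1 for n, c in enumerate(y))
            + sum(u < n < v and c in (1, 2) for n, c in enumerate(y)))

events = list(detect_events([10, 11, 12, 90, 91, 92], cluster, loss))
# The step at sample index 3 is detected as a balanced two-sample event.
```

A real implementation would update the DBSCAN structure incrementally on insertion and deletion rather than re-clustering the whole segment, but the control flow is the same.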
After each detection, the process restarts from the first sample of the second stationary segment,
$\mathbf{x}_v$. The main objective of the backward reduction step is to extract balanced stationary segments
(i.e. $|C_1| \approx |C_2|$) around the transient interval. Balanced segments lead to more stable steady-state
features as well as enhanced robustness to missed detections (i.e. false negatives).
4. EXPERIMENTS AND RESULTS
The proposed event detection approach has been evaluated on different power datasets, among
them the Building-Level fUlly labelled Electricity Disaggregation (BLUED) dataset [16]. In
the following, we show the results of applying the event detection algorithm with event model
$\mathcal{M}_3$ and the DBSCAN clustering scheme to the BLUED dataset. We only show the evaluation of
detection results. Evaluation of the accuracy of the transient-interval segmentation and of the
stability of the extracted features is beyond the scope of this paper.
Table 1 shows the event detection results on the real and reactive power signals from the BLUED
dataset. BLUED includes aggregate measurements from a two-phase residential building (phases A
and B), and each phase is evaluated separately. True Positives (TP) is the number of successful
detections, False Positives (FP) is the number of detections that do not correspond to actual
events, while False Negatives (FN) is the number of missed events. Finally, the False Positive
Percentage (FPP), precision, recall, and F1-score measures are defined as
$\mathrm{FPP} = \frac{FP}{E}$  (8)

$\mathrm{precision} = \frac{TP}{TP + FP}$  (9)

$\mathrm{recall} = \frac{TP}{TP + FN}$  (10)

$F_1\text{-score} = \frac{2 \times \mathrm{precision} \times \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}$  (11)
where $E$ is the number of events. The results show highly precise detection, with a relatively low
number of false positives in both phases. It is also observed that noise in the second
phase (phase B) still masks a relatively large number of events.
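As a quick sanity check, Equations (8)-(11) can be evaluated directly on the phase B counts reported in Table 1 (TP = 1097, FP = 79, FN = 512, E = 1609):

```python
# The metrics of Equations (8)-(11), applied to the phase B column of Table 1.

def metrics(tp, fp, fn, e):
    fpp = fp / e                                       # Equation (8)
    precision = tp / (tp + fp)                         # Equation (9)
    recall = tp / (tp + fn)                            # Equation (10)
    f1 = 2 * precision * recall / (precision + recall) # Equation (11)
    return fpp, precision, recall, f1

fpp, p, r, f1 = metrics(tp=1097, fp=79, fn=512, e=1609)
print(f"FPP={fpp:.2%}  precision={p:.2%}  recall={r:.2%}  F1={f1:.2%}")
# FPP=4.91%  precision=93.28%  recall=68.18%  F1=78.78%
```

The same function reproduces the phase A and total columns from their respective TP/FP/FN counts.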
Table 1. Event detection results on the BLUED [16] dataset.

                         Phase A    Phase B    Total
Number of events E       892        1609       2501
Number of detections     874        1176       2050
True Positives (TP)      867        1097       1964
False Positives (FP)     7          79         86
False Negatives (FN)     25         512        537
FPP                      0.78%      4.91%      3.44%
precision                99.20%     93.28%     95.80%
recall (TPR)             97.20%     68.18%     78.53%
F1-score                 98.19%     78.78%     86.31%
5. CONCLUSIONS
We introduced a novel clustering-based approach for sequential event detection. The proposed
algorithm features accurate segmentation of the stationary and non-stationary intervals for more
stable feature extraction, support for arbitrarily shaped stationary segments with no prior
assumptions on their sample distribution, and improved robustness to noise as well as to parameter
variations.
REFERENCES
[1] G. W. Hart, “Nonintrusive appliance load monitoring”, in proceedings of the IEEE: vol.80, no.12, pp.
1870-1891, Dec. 1992. doi:10.1109/5.192069
[2] A. I. Cole and A. Albicki, "Data extraction for effective non-intrusive identification of residential
power loads", in proceedings of the Instrumentation and Measurement Technology Conference
(IMTC) 1998 IEEE: vol.2, pp.812-815, May 1998. doi:10.1109/IMTC.1998.676838
[3] S. Drenker and A. Kader, "Nonintrusive monitoring of electric loads", in Computer Applications in
Power, IEEE: vol.12, no.4, pp.47-51, Oct 1999. doi:10.1109/67.795138
[4] M. El Hachemi Benbouzid, “A review of induction motors signature analysis as a medium for faults
detection”, IEEE Transactions on Industrial Electronics, vol.47, no.5, pp.984-993, Oct 2000.
[5] S. N. Patel, T. Robertson, J. A. Kientz, M. S. Reynolds, and G. D. Abowd, “At the Flick of a Switch:
Detecting and Classifying Unique Electrical Events on the Residential Power Line”, in UbiComp
2007: Ubiquitous Computing, vol.4717, pp.271-288, 2007.
[6] M. Basseville and I. V. Nikiforov. Detection of Abrupt Changes: Theory and Application. Prentice
Hall, 1993.
[7] M. Berges, E. Goldman, L. Soibelman, H. S. Matthews, and K. Anderson, “User-centred non-
intrusive electricity load monitoring for residential buildings”, Journal of Computing in Civil
Engineering, vol.25, no.1, 2011.
[8] K. D. Anderson, M. E. Berges, A. Ocneanu, D. Benitez, and J. M. F. Moura, “Event Detection for
Non-Intrusive Load Monitoring”, in IECON 2012, 38th Annual Conference on IEEE Industrial
Electronics Society, October 2012.
[9] K. N. Trung, E. Dekneuvel, B. Nicolle, and O. Zammit, “Event Detection and Disaggregation
Algorithms for NIALM System”, in the 2nd International Non-Intrusive Load Monitoring (NILM)
Workshop, Jun 2014.
[10] Y. Jin, E. Tebekaemi, M. Berges, and L. Soibelman, “A time-frequency approach for event detection
in nonintrusive load monitoring” in proceedings of the Signal Processing, Sensor Fusion, and Target
Recognition, Orlando, Florida, USA, 2011.
[11] M. Volpi, D. Tuia, G. Camps-Valls, and M. Kanevski, "Unsupervised Change Detection With
Kernels", in Geoscience and Remote Sensing Letters, IEEE: vol.9, no.6, pp.1026-1030, Nov. 2012.
doi: 10.1109/LGRS.2012.2189092
[12] M. Luong, V. Perduca, and G. Nuel, “Hidden Markov Model Applications in Change-Point
Analysis”, arXiv preprint arXiv:1212.1778, 2012.
[13] G. L. Grinblat, L. C. Uzal and P. M. Granitto, “Abrupt change detection with one-class time-adaptive
support vector machines”, Expert Systems with Applications Journal, vol. 40, pp. 7242–7249, 2013.
[14] S. B. Leeb, S. R. Shaw, and J. L. Kirtley Jr., “Transient event detection in spectral envelope estimates
for nonintrusive load monitoring”, IEEE Transactions on Power Delivery, vol.10, no.3, pp.1200-1210,
July 1995.
[15] M. Ester, H. P. Kriegel, J. Sander, and X. Xu, “A density based algorithm for discovering clusters in
large spatial databases with noise” in proceedings of Knowledge Discovery and Data mining (KDD),
1996.
[16] K. Anderson, A. Ocneanu, D. Benitez, D. Carlson, A. Rowe, and M. Berges, “BLUED: a fully
labeled public dataset for Event-Based Non-Intrusive load monitoring research”, in proceedings of
the 2nd Knowledge Discovery and Data mining (KDD) Workshop on Data Mining Applications in
Sustainability (SustKDD), Beijing, China, August 2012. URL: http://nilm.cmubi.org/