This document proposes an analytical framework called FESM for evaluating and comparing elastic similarity measures for time series pattern recognition. FESM has three main components: 1) It classifies elastic similarity measures into two approaches - those based on Lp norms and those based on matching thresholds. 2) It evaluates the classified similarity measures using proposed qualitative criteria. 3) It determines the appropriate application scopes for the classified similarity measures. FESM is intended to help users quickly understand and select the best existing elastic similarity measure for a given time series pattern recognition task.
Forecasting of electric consumption in a semiconductor plant using time serie...Alexander Decker
This document summarizes a study that used time series methods to forecast electricity consumption in a semiconductor plant. The study analyzed 36 months of historical electricity consumption data from 2010-2012 to select the best forecasting model. Single exponential smoothing was found to have the lowest Mean Absolute Percentage Error (MAPE) of 5.60% and was determined to be the best forecasting method. The selected model will be used to forecast future electricity consumption for the plant.
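As a rough illustration of the selection procedure described above (a sketch, not the study's data or code; the toy series, the alpha grid, and the helper names ses_forecast and mape are assumptions), single exponential smoothing can be fit over a grid of smoothing constants and scored by MAPE in Python:

import numpy as np

def ses_forecast(y, alpha):
    # One-step-ahead single exponential smoothing forecasts.
    f = np.empty(len(y))
    f[0] = y[0]  # initialize with the first observation
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Hypothetical monthly consumption figures; the plant's 36 months are not public.
y = np.array([310, 305, 320, 318, 330, 342, 339, 351, 348, 360, 372, 368], float)
best_err, best_alpha = min((mape(y[1:], ses_forecast(y, a)[1:]), a)
                           for a in np.linspace(0.05, 0.95, 19))
print(f"best alpha={best_alpha:.2f}, MAPE={best_err:.2f}%")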
This document discusses recursive least-squares estimation when observation data contains interval uncertainty, also known as imprecision, in addition to random variability. It introduces a recursive formulation of least-squares estimation that efficiently combines the most recent parameter estimate with new observation data. Overestimation is a key challenge for recursive formulations when working with interval data that must be rigorously avoided. The paper also presents an illustrative example of estimating the state of a damped harmonic oscillation using the proposed recursive interval least-squares approach.
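The interval arithmetic itself is beyond a short note, but the classical point-valued recursive least-squares step that the paper's formulation extends can be sketched as follows (rls_update, the forgetting factor lam, and the simulated example are illustrative assumptions):

import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    # One RLS step: fold observation (x, y) into the running estimate theta.
    # P plays the role of an (inverse-information) covariance matrix;
    # lam is an optional forgetting factor.
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)          # gain vector
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam
    return theta, P

# Example: recover w in y = w.x + noise, one observation at a time.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    x = rng.normal(size=2)
    theta, P = rls_update(theta, P, x, w_true @ x + 0.01 * rng.normal())
print(theta)  # approaches w_true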
This document summarizes a new technique for demand forecasting called exponentially smoothed regression analysis. The technique uses regression models to separately estimate trend and multiplicative seasonality in time series data. It draws on the power of regression analysis while allowing estimates to be smoothed over time like exponential smoothing methods. The technique estimates seasonal factors, deseasonalizes the data, then estimates trend and base demand through regression. This allows proper separation of trend and seasonality effects compared to other methods. The technique is computationally efficient and easy to implement.
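A minimal sketch of the general recipe just described, estimate multiplicative seasonal indices, deseasonalize, then fit base demand and trend by regression, might look like this (the function name and toy quarterly series are hypothetical, and the paper's estimators are additionally smoothed over time rather than fit once):

import numpy as np

def trend_season_fit(y, period):
    # Crude multiplicative seasonal indices: per-season mean / overall mean.
    t = np.arange(len(y))
    idx = np.array([y[s::period].mean() for s in range(period)])
    idx /= idx.mean()
    deseason = y / idx[t % period]            # remove seasonality
    slope, base = np.polyfit(t, deseason, 1)  # trend and base demand by OLS
    return base, slope, idx

y = np.array([12, 9, 16, 23, 14, 11, 19, 26, 17, 13, 22, 29], float)
base, slope, idx = trend_season_fit(y, period=4)
fitted = (base + slope * np.arange(len(y))) * idx[np.arange(len(y)) % 4]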
This document summarizes and analyzes the performance of Newton's method, BFGS method, and SR1 method for minimizing a quadratic and convex function. It finds that:
1) Newton's method performed the best, requiring fewer iterations and achieving greater accuracy than the other methods.
2) For constrained problems, the SR1 method achieved some success due to its flexibility in not always requiring a descent direction.
3) While Newton's method has the best theoretical convergence rate, quasi-Newton methods are more applicable to complex problems, where inverting the Hessian becomes computationally expensive.
4) When minimizing quadratic and convex functions, Newton's method generally performs better than the other tested methods. However, the best
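For a concrete, if simplified, comparison in the same spirit, SciPy exposes all three updates; the quadratic and starting point below are arbitrary, and SciPy offers SR1 only as a Hessian approximation inside its trust-constr solver rather than as a standalone method:

import numpy as np
from scipy.optimize import minimize, SR1

# Convex quadratic f(x) = 0.5 x'Ax - b'x with exact gradient and Hessian.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f    = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A

x0 = np.array([5.0, -5.0])
newton = minimize(f, x0, method="Newton-CG", jac=grad, hess=hess)
bfgs   = minimize(f, x0, method="BFGS", jac=grad)
sr1    = minimize(f, x0, method="trust-constr", jac=grad, hess=SR1())
print(newton.nit, bfgs.nit)   # Newton typically needs fewer iterations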
The document describes a Stata package of programs for estimating panel vector autoregression (VAR) models. The package allows for convenient estimation, model selection, inference and other analyses of panel VAR models using generalized method of moments in a Stata environment. The programs address panel VAR specification, estimation, model selection criteria, impulse response analyses, and forecast error variance decomposition. The syntax and outputs of the commands are designed to be similar to Stata's built-in VAR commands for time series data.
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
Week 4 forecasting - time series - smoothing and decomposition - m.awaluddin.tMaling Senk
Forecasting - time series - smoothing and decomposition methods
Smoothing methods such as moving averages and exponential methods. The steps of the decomposition methods and an example of them. A case study of smoothing methods using Single Exponential Smoothing, Double Exponential Smoothing, and Triple Exponential Smoothing.
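As one example from this family, a minimal double (Holt) exponential smoothing recursion could be written as below (the initialization choice and function name are illustrative):

def holt(y, alpha, beta):
    # Double exponential smoothing: separate level and trend components.
    level, trend = y[0], y[1] - y[0]
    fcast = [y[0]]
    for t in range(1, len(y)):
        fcast.append(level + trend)          # one-step-ahead forecast
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return fcast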
Oscar Nieves (11710858) Computational Physics Project - Inverted PendulumOscar Nieves
This document describes a numerical simulation of an inverted pendulum system created in MATLAB using a 4th order Runge-Kutta algorithm. The simulation models an inverted pendulum attached to a horizontally moving cart. Forces like air drag and friction are included, and parameters like mass, pendulum length, and initial conditions can be varied. Small changes to initial conditions can lead to large differences in motion, demonstrating the system's chaotic behavior. The document also outlines the methodology for adapting the Runge-Kutta algorithm to solve systems of coupled differential equations.
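A generic RK4 step for a coupled first-order system, of the kind the document adapts, can be sketched as follows; the simple damped pendulum used as the right-hand side is a stand-in for the full cart-pendulum dynamics, and all parameter values are assumptions:

import numpy as np

def rk4_step(f, t, y, h):
    # Classic 4th-order Runge-Kutta step for y' = f(t, y), y a state vector.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def pendulum(t, y, g=9.81, L=1.0, c=0.1):
    # State y = [theta, omega] for a damped (non-inverted) pendulum.
    return np.array([y[1], -(g / L) * np.sin(y[0]) - c * y[1]])

y, t, h = np.array([0.1, 0.0]), 0.0, 1e-3
for _ in range(10_000):
    y = rk4_step(pendulum, t, y, h)
    t += h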
This document discusses and compares instance-based and feature-based approaches to time series classification. Instance-based classification involves directly measuring distances between time series in the time domain, while feature-based classification extracts thousands of features from time series using time series analysis algorithms and learns classifiers based on those features. The document presents an approach that uses extensive feature extraction and feature selection to build highly comparative feature-based classifiers, and evaluates this approach on 20 publicly available datasets.
Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion...ijccmsjournal
This document compares the computational complexity of four multi-sensor data fusion methods based on the Kalman filter using MATLAB simulations. The four methods are: group-sensor method, sequential-sensor method, inverse covariance form, and track-to-track fusion. The results show that the inverse covariance method has the best computational performance if the number of sensors is above 20. For fewer sensors, other methods like the group sensors method are more appropriate due to lower computational loads when inverting smaller matrices.
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...ijccmsjournal
Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The most widely used estimation tool in target tracking is the Kalman filter, and there are several mathematical approaches to combining the observations of multiple sensors with it. An important issue in choosing an approach is computational complexity. In this paper, four data fusion algorithms based on the Kalman filter are considered, including three centralized methods and one decentralized method. Using MATLAB, the computational loads of these methods are compared as the number of sensors increases. The results show that the inverse covariance method has the best computational performance when the number of sensors is above 20. For a smaller number of sensors, other methods, especially the group-sensor method, are more appropriate.
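To make the inverse-covariance idea concrete, a static information-form fusion step is sketched below: every sensor contributes an additive information term, and only one inverse of the fixed-size state information matrix is needed regardless of the sensor count, which is why its cost grows slowly as sensors are added. The interface is an assumption, not the paper's code:

import numpy as np

def fuse_information_form(x_prior, P_prior, H, Rs, zs):
    # Fuse N measurements z_i = H x + v_i, v_i ~ N(0, R_i).
    Y = np.linalg.inv(P_prior)    # prior information matrix
    y = Y @ x_prior               # prior information vector
    for R, z in zip(Rs, zs):
        Ri = np.linalg.inv(R)     # small m x m inverse per sensor
        Y += H.T @ Ri @ H
        y += H.T @ Ri @ z
    P = np.linalg.inv(Y)          # single n x n inverse at the end
    return P @ y, P

x0, P0, H = np.zeros(2), np.eye(2), np.eye(2)
Rs = [np.eye(2) * 0.5] * 25       # 25 identical sensors (illustrative)
zs = [np.array([1.0, 2.0])] * 25
x, P = fuse_information_form(x0, P0, H, Rs, zs)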
Some Reviews on Circularity Evaluation using Non- Linear Optimization TechniquesIRJET Journal
This document provides a review of various nonlinear optimization techniques that can be used to evaluate circularity based on measured coordinate data. It discusses techniques such as minimum zone circle, least squares circle, maximum inscribed circle, and minimum circumscribed circle for data fitting to evaluate circularity error. It also reviews optimization algorithms like genetic algorithms and particle swarm optimization that can be applied to determine the optimal circle fit to minimize circularity error. The document concludes that genetic algorithms and hybrid optimization techniques that combine multiple algorithms can provide efficient evaluations of circularity compared to other discussed methods.
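As an example of the least-squares circle technique mentioned, the classic algebraic (Kasa) fit takes only a few lines; the peak-to-valley circularity error computed at the end is one common definition, and the names are illustrative:

import numpy as np

def least_squares_circle(x, y):
    # Solve x^2 + y^2 = 2a x + 2b y + c in the least-squares sense;
    # center is (a, b), radius sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    radii = np.hypot(x - a, y - b)
    return (a, b), r, radii.max() - radii.min()   # out-of-roundness

theta = np.linspace(0, 2 * np.pi, 50)
center, r, err = least_squares_circle(5 + np.cos(theta), -2 + np.sin(theta))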
Visual tracking using particle swarm optimizationcsandit
The problem of robust extraction of visual odometry from a sequence of images obtained by an eye-in-hand camera configuration is addressed. A novel approach to planar template-based tracking is proposed which performs a non-linear image alignment for successful retrieval of camera transformations. In order to obtain the global optimum, a bio-metaheuristic is used to optimize the similarity among the planar regions. The proposed method is validated on image sequences with real as well as synthetic transformations and is found to be resilient to intensity variations. A comparative analysis of various similarity measures as well as various state-of-the-art methods reveals that the algorithm tracks the planar regions robustly and has good potential for use in real applications.
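The paper's alignment cost is not given in this summary, so the sketch below pairs a generic particle swarm optimizer (the metaheuristic named in the title) with a stand-in one-dimensional alignment cost; every parameter value and name here is an assumption:

import numpy as np

def pso(cost, dim, n=30, iters=100, bounds=(-5, 5), w=0.7, c1=1.5, c2=1.5):
    # Minimal particle swarm optimizer over a box-bounded search space.
    rng = np.random.default_rng(1)
    x = rng.uniform(*bounds, size=(n, dim))   # particle positions
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, *bounds)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Stand-in cost: squared error between a template and a shifted, scaled copy.
t = np.linspace(0, 2 * np.pi, 200)
template = np.sin(t)
cost = lambda p: np.sum((p[1] * np.sin(t + p[0]) - template) ** 2)
print(pso(cost, dim=2))   # converges to an equivalent optimum, e.g. shift 0, gain 1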
Possible limits of accuracy in measurement of fundamental physical constantsirjes
The measurement uncertainties of fundamental physical constants should take into account all possible and most influential factors. One of them is the finiteness of the model, which causes the existence of an a-priori error. The proposed formula for calculating this error allows its value to be compared with the actual experimental measurement error, which cannot be made arbitrarily small. According to the suggested approach, the error of the investigated fundamental physical constant, measured in conventional field studies, will always be higher than the error caused by the finite number of dimensional recorded variables of physical-mathematical models. Examples of practical application of the concept to measurement of the fine structure constant, the speed of light, and the Newtonian constant of gravitation are discussed.
Refining Underwater Target Localization and Tracking EstimatesCSCJournals
Improving the accuracy and reliability of localization estimates and the tracking of underwater targets is a constant quest in ocean surveillance operations. The localization estimates may vary owing to various noises and interferences, such as sensor errors and environmental noise. Even though adaptive filters like the Kalman filter subdue these problems and yield dependable results, targets that maneuver can cause errors that are difficult to account for unless suitable corrective measures are implemented. Simulation studies on improving the localization and tracking estimates for a stationary target as well as a moving target, including maneuvering situations, are presented in this paper.
The International Journal of Computational Engineering Research (IJCER) is an international, English-language online journal published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Accurate time series classification using shapeletsIJDKP
Time series data are sequences of values measured over time. One of the most recent approaches to classification of time series data is to find shapelets within a data set. Time series shapelets are time series subsequences that represent a class. To compare two time series sequences, existing work uses the Euclidean distance measure. The problem with Euclidean distance is that it requires data to be standardized if scales differ. In this paper, we perform classification of time series data using time series shapelets with the Mahalanobis distance measure. The Mahalanobis distance is a descriptive statistic that provides a relative measure of a data point's distance (residual) from a common point. It is used to identify and gauge the similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. We show that the Mahalanobis distance yields higher accuracy than the Euclidean distance measure.
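A minimal Mahalanobis computation matching the definition above is shown here (SciPy's scipy.spatial.distance.mahalanobis performs the same calculation); the reference data are random stand-ins:

import numpy as np

def mahalanobis(x, y, VI):
    # Distance between x and y given the inverse covariance VI of the data set.
    d = x - y
    return float(np.sqrt(d @ VI @ d))

X = np.random.default_rng(0).normal(size=(100, 8))   # reference subsequences
VI = np.linalg.inv(np.cov(X, rowvar=False))
print(mahalanobis(X[0], X[1], VI))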
Advanced cosine measures for collaborative filteringLoc Nguyen
Cosine similarity is an important measure for comparing two vectors in many areas of data mining and information retrieval. In this research, the cosine measure and its advanced variants for collaborative filtering (CF) are evaluated. The cosine measure is effective, but it has a drawback: the end points of two vectors may be far from each other according to Euclidean distance while their cosine is still high. This negative effect of Euclidean distance decreases the accuracy of cosine similarity. Therefore, a so-called triangle area (TA) measure is proposed as an improved version of the cosine measure. The TA measure uses the ratio of a basic triangle area to the whole triangle area as a reinforcing factor for Euclidean distance, so that it alleviates the negative effect of Euclidean distance while keeping the simplicity and effectiveness of both the cosine measure and Euclidean distance when computing the similarity of two vectors. TA is considered an advanced cosine measure. TA and other advanced cosine measures are tested against other similarity measures. Experimental results show that TA is not a preeminent measure, but it is better than traditional cosine measures in most cases, it is adequate for real-time applications, and its formula is simple.
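The drawback described, a high cosine despite a large Euclidean separation, is easy to demonstrate:

import numpy as np

u = np.array([1.0, 1.0])
v = np.array([100.0, 100.0])   # same direction, far away in space
cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos, np.linalg.norm(u - v))   # cosine = 1.0 despite a distance of ~140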
Maximum likelihood estimation-assisted ASVSF through state covariance-based 2...TELKOMNIKA JOURNAL
The smooth variable structure filter (SVSF) is a relatively new robust predictor-corrector method for estimating the state. To use it effectively, the SVSF requires an accurate system model and exact prior knowledge of both the process and measurement noise statistics. Unfortunately, the system model is always somewhat inaccurate because of simplifications made at the outset, and the small additive noises are only partially known or even unknown. This limitation can degrade the performance of the SVSF or even lead to divergence. For this reason, this paper proposes an adaptive smooth variable structure filter (ASVSF) obtained by conditioning the probability density function of a measurement on the unknown parameters at each iteration. The proposed method is applied to the localization and direct point-based observation task of a wheeled mobile robot, TurtleBot2. Finally, realistic simulation and comparison with a conventional method show that the proposed method achieves better accuracy and stability in terms of the root mean square error (RMSE) of the estimated map coordinate (EMC) and estimated path coordinate (EPC).
A Singular Spectrum Analysis Technique to Electricity Consumption ForecastingIJERA Editor
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA is capable of decomposing a time series into independent components such as trend, oscillatory behavior, and noise. This paper focuses on evaluating the performance of the SSA approach on the monthly electricity consumption of the Middle Province in the Gaza Strip, Palestine. The forecasting results are compared with those of exponential smoothing state space (ETS) and ARIMA models. The three techniques perform similarly well in the forecasting process; however, SSA outperforms the ETS and ARIMA techniques according to forecasting error accuracy measures.
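A bare-bones SSA decomposition in the spirit described, embedding, SVD, and diagonal averaging, might look like the sketch below; the window length and toy series are arbitrary choices, not the paper's settings:

import numpy as np

def ssa_components(y, L):
    # Embed the series into an L x K trajectory matrix, take its SVD,
    # and rebuild each rank-1 component by diagonal averaging.
    N, K = len(y), len(y) - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        comps.append(np.array([np.mean(Xk[::-1].diagonal(i - L + 1))
                               for i in range(N)]))
    return comps   # leading components typically carry trend/oscillations

y = np.sin(np.linspace(0, 8 * np.pi, 200)) + np.linspace(0, 2, 200)
trend_like = ssa_components(y, L=40)[0]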
The models, principles, and steps of Bayesian time series analysis and forecasting have been established extensively during the past fifty years. To estimate the parameters of an autoregressive (AR) model, we develop Markov chain Monte Carlo (MCMC) schemes for inference on the AR model. Our interest is in proposing a new prior distribution placed directly on the AR parameters; to that end, we revisit the stationarity conditions to determine a flexible prior for the AR model parameters, and an MCMC procedure is proposed to estimate the coefficients of the AR(p) model. In this thesis, a set of new sufficient stationarity conditions is proposed for autoregressive models of any lag order. The new methodology is motivated by considering the AR(2) and AR(3) models, and through simulation we study the sufficiency and necessity of the proposed conditions. Additionally, the parameter space of the AR(3) model is drawn for the stationary region of Barndorff-Nielsen and Schou (1973) and for our newly suggested condition. A new prior distribution is proposed, placed directly on the parameters of the AR(p) model; this is motivated by priors proposed for AR(1) through AR(6), which take advantage of the range of the AR parameters. We then develop a Metropolis step within Gibbs sampling for estimation. The scheme is illustrated using simulated data for the AR(2), AR(3), and AR(4) models and extended to models with higher lag order. The thesis compares the newly proposed prior distribution with the prior distributions obtained from the correspondence between partial autocorrelations and parameters discussed by Barndorff-Nielsen and Schou (1973).
Design of State Estimator for a Class of Generalized Chaotic Systemsijtsrd
In this paper, a class of generalized chaotic systems is considered and the state observation problem of such a system is investigated. Based on the time domain approach with differential inequality, a simple state estimator for such generalized chaotic systems is developed to guarantee the global exponential stability of the resulting error system. Besides, the guaranteed exponential decay rate can be correctly estimated. Finally, several numerical simulations are given to show the effectiveness of the obtained result. Yeong-Jeu Sun "Design of State Estimator for a Class of Generalized Chaotic Systems" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29270.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/29270/design-of-state-estimator-for-a-class-of-generalized-chaotic-systems/yeong-jeu-sun
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...BRNSS Publication Hub
The most important assumption about time series and econometric data is stationarity. Therefore, this study focuses on the behavior of some parameters in the stationarity of autoregressive (AR) and moving average (MA) models. Simulation studies were conducted using the R statistical software to investigate the parameter values at different orders p of AR and q of MA models and at different sample sizes. For each stationary setting of p and q, parameters such as the mean, variance, autocorrelation function (ACF), and partial autocorrelation function (PACF) were determined. The study concluded that the absolute values of the ACF and PACF of AR and MA models increase as the parameter values increase but decrease as their orders increase, tending to zero at higher lag orders; this is most clearly observed at the large sample size (n = 300). However, their values decline as sample size increases when compared by order across the sample sizes. Furthermore, the mean values of the first-order AR and MA models increased with increasing parameter values but decreased with decreasing sample size, tending to zero at large sample sizes, as did the variances.
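A small Python analogue of the kind of simulation described (the study itself used R) is sketched below: simulate a stationary AR(1) at several parameter values and inspect how the ACF decays at n = 300:

import numpy as np

def simulate_ar1(phi, n, rng):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def acf(y, nlags):
    y = y - y.mean()
    c0 = y @ y
    return np.array([(y[:-k] @ y[k:]) / c0 if k else 1.0
                     for k in range(nlags + 1)])

rng = np.random.default_rng(42)
for phi in (0.3, 0.6, 0.9):
    print(phi, acf(simulate_ar1(phi, 300, rng), 5).round(2))  # ACF ~ phi**k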
Similarity Measures for Traditional Turkish Art MusicCSCJournals
Pitch histograms are frequently used for a wide range of applications in music information retrieval (MIR), which mainly focuses on western music. However, there are significant differences between the pitch spaces of traditional Turkish art music (TTAM) and western music which prevent current methods from being applied directly. In this sense, the comparison of pitch histograms for TTAM corresponds to a core research problem in pattern recognition: finding an appropriate similarity measure in relation to the metric axioms and the characteristics of the data. Therefore, we have evaluated various similarity measures frequently used in histogram comparison, such as the L1-norm, L2-norm, histogram intersection, correlation coefficient measures, and the earth mover's distance (EMD), for TTAM. We then discuss one of the problems of the domain, concerning measures of overlap and/or non-overlap between ordinal-type histograms, and present an improved version of EMD for TTAM.
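For equal-mass one-dimensional histograms over the same ordinal bins, EMD reduces to the summed absolute difference of cumulative sums, which makes the measure concrete; the normalization step here is an added assumption for unequal-mass inputs:

import numpy as np

def emd_1d(p, q):
    # Earth mover's distance between two 1-D histograms on shared ordinal bins.
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return np.sum(np.abs(np.cumsum(p - q)))

print(emd_1d([0, 1, 0, 0], [0, 0, 0, 1]))   # mass moves 2 bins -> 2.0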
Field Strength Predicting Outdoor ModelsIRJET Journal
This document discusses several outdoor propagation models used to predict radio signal strength and path loss over distance. It begins by introducing concepts of transmission power, signal strength, and path loss. It then describes common factors that affect outdoor radio propagation like diffraction, reflection, refraction, and scattering. The rest of the document summarizes several empirical, theoretical, and physical propagation models including Okumura, Hata, ECC-33, COST-231 Hata, and Egli models. These models use different methods and equations to predict path loss and were developed based on extensive radio signal measurement data in various outdoor environments.
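As one worked example from this family, the Okumura-Hata median path loss for urban areas follows directly from its published closed form (shown with the small/medium-city mobile antenna correction):

import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    # Valid roughly for 150-1500 MHz, base height 30-200 m, distance 1-20 km.
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

print(round(hata_urban_path_loss(900, 50, 1.5, 5), 1))   # path loss in dB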
The document discusses the challenges of traditional analytics tools in performing data discovery on large datasets. It introduces the Urika-GD appliance as addressing these challenges in three key ways:
1. It uses a graph database to dynamically identify relationships between new data sources without predefined schemas.
2. It leverages massive multithreading and a purpose-built hardware accelerator to return real-time results to complex ad-hoc queries as datasets grow.
3. Its large shared memory architecture of up to 512TB allows data to be accessed virtually randomly without predictable patterns, unlike traditional tools requiring data partitioning.
The document describes four main objectives: 1) processing information according to the needs of the organization, 2) supporting the accounting information system in accordance with regulations, 3) promoting appropriate interaction with oneself, others, and nature in work and social contexts, and 4) applying acquired knowledge and skills to solve real problems in the productive sector.
Western Copper and Gold Corporate Presentation - June 2015PaceCreativeGroup
Western Copper and Gold is developing the Casino copper-gold mine in Canada's Yukon Territory. The Casino project has over 1 billion tonnes of proven mineral reserves containing 4.5 billion pounds of copper and 8.9 million ounces of gold. A 2013 feasibility study showed the project has a post-tax NPV of $1.83 billion and an IRR of 20.1% using long-term metal price assumptions. Western Copper is de-risking the project through permitting, engineering studies, and securing partnerships.
SPC 11 John Hyde_Presentation_Seoul.10.13John Hyde
John Hyde, a board member of GOPAC, presented on the role of parliamentarians in fighting corruption. He discussed how corruption costs governments trillions of dollars annually, hindering efforts to achieve development goals like the MDGs. GOPAC works with 700 parliamentarian members across 48 countries to draft anti-corruption laws and establish oversight bodies. Examples of GOPAC's successes include reforms in the Philippines, Zambia, Timor Leste, and others. Challenges include competing priorities for parliamentarians and measuring effectiveness. GOPAC is developing a new anti-corruption index to better evaluate country progress.
The document compares functional outcomes between pediatric and adult patients with traumatic brain injury (TBI) who underwent inpatient rehabilitation. It finds:
1) Increasing age was associated with improved outcomes in children but poorer outcomes in adults, as measured by Functional Independence Measure (FIM) scores.
2) Several factors like gender, Glasgow Coma Scale scores, and presence of midline shift differed between pediatric and adult groups and impacted functional outcomes.
3) The relationship between age and functional outcome after TBI differs between pediatric and adult populations, with moderating variables also having different effects between the two age groups.
ASPG 2005 07 -Parliamentary Oversight from the Parliament's Perspective- HydeJohn Hyde
The document summarizes the oversight of Western Australia's Corruption and Crime Commission (CCC) from the perspective of the parliamentary committee overseeing the CCC. Key points:
- The CCC effectively investigated and exposed corruption cases, gaining public trust. However, the Acting Commissioner tipped off a suspect under investigation, admitting to misconduct.
- The parliamentary committee, CCC, and Inspector handled this issue transparently and properly according to procedures. The committee released the Inspector's report publicly within days to avoid cover-up accusations.
- This case shows how oversight bodies should handle internal misconduct - through internal investigation and transparency to maintain public trust in anti-corruption efforts. It provides a template for accountability.
Western Copper and Gold Corporate Presentation July 2015PaceCreativeGroup
Western Copper and Gold Corporation is developing the Casino copper-gold mine in Canada's Yukon Territory. The Casino project has world-class mineral resources including copper reserves of 4.5 billion pounds and gold reserves of 8.9 million ounces. Feasibility studies show the mine can produce over 170 million pounds of copper and 266,000 ounces of gold annually for 22 years at low operating costs. With capital costs of $2.5 billion, the Casino mine has favorable economics compared to other emerging copper projects globally. Development is ongoing with permitting and further engineering.
This document provides information on various safety, spill cleanup, and mining products from Diamond Rubber including:
1) Urethane chocks of various sizes for mining operations and fueling vehicles.
2) Other mining chocks made of rubber, plastic, and foam.
3) Safety cones and vests for visibility.
4) A portable emergency spill kit for oil or fuel spills.
5) A non-sparking cleanup shovel for safe cleanup of spills.
Western Copper and Gold - Corporate Presentation April 2015 PaceCreativeGroup
Western Copper and Gold Corporation is developing the Casino copper-gold mine in Yukon, Canada. The Casino project has over 1 billion tonnes of mineral reserves containing 4.5 billion pounds of copper and 8.9 million ounces of gold. It is expected to produce 245 million pounds of copper and 399,000 ounces of gold annually over its 22-year mine life. Western Copper has advanced the project through permitting and engineering studies. It aims to finalize project financing in 2015 and begin construction in 2016-2017.
This document provides an overview of the various services offered by CIPFA (the Chartered Institute of Public Finance and Accountancy) to support public sector finance practitioners and organizations. It summarizes CIPFA's advisory services, benchmarking services, conferences and training opportunities, statistical resources, and professional networks that are designed to help customers spend public money wisely and adapt to changing demands and legislation. CIPFA works across the public sector in the UK and offers tailored solutions to review practices, identify efficiencies, share best practices, and support members with specialized expertise and resources.
This document outlines a 14-day itinerary for a trip to western Canada planned by Rouler Travel. The itinerary includes 2 nights in Vancouver with activities like visiting Stanley Park, 2 nights in Whistler for skiing and snowboarding, 3 nights in Victoria for sightseeing downtown and visiting Butchart Gardens, and 3 nights in Tofino for whale watching, surfing, and visiting West Coast Safaris wildlife park. Transportation between locations includes flights, buses, and float planes. The document provides details of hotels, restaurants, and activities for each day. It recommends Rouler Travel for their expertise in planning enjoyable and stress-free itineraries.
Empowering Parliamentarians to break the Cycle of Corruption-opinion piece He...John Hyde
1) Members of Parliament play a key role in combating corruption through legislation and advocacy. They can establish independent anti-corruption agencies and advocate for anti-corruption practices within government and society.
2) However, simply talking about anti-corruption is not enough; MPs must genuinely believe in it and take ownership of implementation. They should ensure anti-corruption agencies are properly funded and independent.
3) Differences in anti-corruption agencies across Australian states can be traced to the priorities of establishing parliamentarians. The effectiveness of an agency starts with its enabling legislation.
CIPFA is a professional body for accountants that specializes in public services. It provides qualifications, training, guidance and support to public finance professionals around the world. CIPFA works to improve standards of public financial management and governance globally. It helps countries develop their public financial systems and provides tools to help public organizations implement best practices during times of austerity or change. CIPFA also publishes materials, runs events and offers advice to support public sector professionals in their roles.
Reg Erhardt Library, SAIT Polytechnic. Learn how to effectively organize, record, store, and back up the valuable information generated in your research process. Tools such as data management plans, Evernote, Scrivener, and Google Drive will be reviewed.
The document discusses an algorithm for discovering patterns (motifs) in time series data. It proposes a modification to an existing motif discovery algorithm that improves performance by eliminating the influence of adjacent subsequences. The modified algorithm is compared to the original on metrics such as the number of motifs discovered, execution time, and the mean distance of motifs from the main pattern (the 1-motif), with the modification showing the same or better performance in all cases tested.
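A brute-force version of the idea, the closest subsequence pair with adjacent (trivial) matches excluded, can be sketched as follows; the exclusion rule and naming are illustrative, and the paper's algorithm is presumably far more efficient:

import numpy as np

def motif_pair(series, m):
    # 1-motif search over all length-m subsequences, skipping pairs whose
    # start indices overlap (|i - j| < m), i.e. trivial adjacent matches.
    n = len(series) - m + 1
    subs = np.array([series[i:i + m] for i in range(n)])
    best, pair = np.inf, (0, 0)
    for i in range(n):
        for j in range(i + m, n):
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best:
                best, pair = d, (i, j)
    return pair, best

t = np.linspace(0, 6 * np.pi, 300)
sig = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=300)
print(motif_pair(sig, m=50))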
This document presents a new forecasting model that combines fuzzy time series and automatic clustering techniques to forecast gasoline prices in Vietnam. The model first uses an automatic clustering algorithm to divide historical gasoline price data into clusters with varying interval lengths. It then fuzzifies the data based on the new intervals to determine fuzzy logical relationships and forecasted values. The model is applied to a dataset of gasoline prices in Vietnam. Results show the proposed model achieves higher forecasting accuracy than a first-order fuzzy time series model.
Combination of Similarity Measures for Time Series Classification using Genet...Deepti Dohare
In this work, we use genetic algorithms to combine similarity measures so as to obtain the best performance. The weight given to each similarity measure evolves over a number of generations to find the best combination. We test our approach on a number of benchmark time series datasets and present promising results.
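A toy version of the scheme, evolving the weight of a two-measure combination against leave-one-out 1-NN accuracy, is sketched below; the two distances (Euclidean and Manhattan) are stand-ins for the measures actually combined, and all GA settings are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
d_euclid = lambda a, b: np.linalg.norm(a - b)
d_manhattan = lambda a, b: np.abs(a - b).sum()   # stand-in second measure

def knn_accuracy(X, labels, w):
    # Leave-one-out 1-NN accuracy with distance w*d1 + (1-w)*d2.
    hits = 0
    for i in range(len(X)):
        ds = [w * d_euclid(X[i], X[j]) + (1 - w) * d_manhattan(X[i], X[j])
              if j != i else np.inf for j in range(len(X))]
        hits += labels[int(np.argmin(ds))] == labels[i]
    return hits / len(X)

def evolve_weight(X, labels, pop=20, gens=15, mut=0.1):
    # Tiny GA: truncation selection, blend crossover, Gaussian mutation.
    w = rng.random(pop)
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, labels, wi) for wi in w])
        parents = w[np.argsort(fit)[-pop // 2:]]
        kids = (rng.choice(parents, pop) + rng.choice(parents, pop)) / 2
        w = np.clip(kids + mut * rng.normal(size=pop), 0, 1)
    return max(w, key=lambda wi: knn_accuracy(X, labels, wi))

# Two synthetic classes of short noisy series.
X = np.vstack([np.sin(np.linspace(0, 2 * np.pi, 30)) + 0.3 * rng.normal(size=30)
               for _ in range(20)] +
              [np.cos(np.linspace(0, 2 * np.pi, 30)) + 0.3 * rng.normal(size=30)
               for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)
print(evolve_weight(X, labels))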
Efficiency of recurrent neural networks for seasonal trended time series mode...IJECEIAES
Seasonal time series with trends are among the most common data sets used in forecasting. This work focuses on the automatic processing of non-pre-processed time series by studying the efficiency of recurrent neural networks (RNNs), in particular the long short-term memory (LSTM) and bidirectional long short-term memory (Bi-LSTM) extensions, for modelling seasonal time series with trend. For this purpose, we are interested in the learning stability of the established systems, using the mean absolute percentage error (MAPE) as a measure. Both simulated and real data were examined, and we found a positive correlation between the signal period and the system input vector length for stable and relatively efficient learning. We also examined the impact of white noise on learning performance.
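A minimal Keras sketch in this spirit, with the input window length set near the signal period as the correlation noted above suggests, follows; the synthetic trended seasonal series and all hyperparameters are assumptions, and a Bi-LSTM variant would wrap the recurrent layer in tf.keras.layers.Bidirectional:

import numpy as np
import tensorflow as tf

def windows(series, length):
    # Slice a 1-D series into (input window, next value) pairs.
    X = np.array([series[i:i + length] for i in range(len(series) - length)])
    return X[..., None], series[length:]   # add the feature axis LSTM expects

series = (np.sin(np.linspace(0, 40 * np.pi, 2000))   # period of 100 samples
          + np.linspace(2, 5, 2000))                 # trend, kept above zero
X, y = windows(series, length=100)                   # window ~ one period

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(100, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mape")   # MAPE, the paper's measure
model.fit(X, y, epochs=5, batch_size=64, verbose=0)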
Proposing a scheduling algorithm to balance the time and cost using a genetic...Editor IJCATR
The document proposes a genetic algorithm approach combined with a local search algorithm inspired by binary gravitational attraction to solve scheduling problems in grid computing. The algorithm aims to minimize task completion time and costs by optimizing resource selection and load balancing. Experimental results showed that the proposed algorithm achieved better optimization of time and costs and selection of resources compared to other algorithms.
Financial Time Series Analysis Based On Normalized Mutual Information FunctionsIJCI JOURNAL
A method of predictability analysis of future values of financial time series is described. The method is based on normalized mutual information functions; their use allows the analysis to avoid any restrictions on the distributions of the parameters and on the correlations between parameters. A comparative analysis of the predictability of financial time series from the Tel Aviv 25 stock exchange has been carried out.
Modeling, simulation & dynamic analysis of four bar planarIAEME Publication
This document discusses modeling, simulating, and analyzing the dynamic forces of a four-bar planar mechanism using CATIA V5 software. The document begins with an introduction to four-bar mechanisms and their importance. It then describes the mathematical modeling of displacement, velocity, and acceleration analysis of four-bar linkages. Next, it explains how to model a four-bar mechanism using different CATIA tools. The document presents results of the simulation in CATIA including graphs of link angle, speed, and acceleration over time. It concludes that CATIA allows simulation of link motion at different positions and validation of analytical equations, providing a valuable tool for mechanism analysis and design optimization.
This document discusses using particle swarm optimization (PSO) to design optimal close-range photogrammetry networks. PSO is introduced as a heuristic optimization algorithm inspired by bird flocking behavior that can be used to solve complex optimization problems. The document then provides an overview of close-range photogrammetry network design and the four design stages. It explains that PSO will be used to optimize the first stage of determining optimal camera station positions. Mathematical models of PSO for close-range photogrammetry network design are developed. Experimental tests are carried out to develop a PSO algorithm that can determine optimum camera positions and evaluate the accuracy of the developed network.
Time Series Forecasting Using Novel Feature Extraction Algorithm and Multilay...Editor IJCATR
Time series forecasting is important because it can often provide the foundation for decision making in a large variety of fields. A tree-ensemble method, referred to as time series forest (TSF), is proposed for time series classification. The approach is based on the concept of data series envelopes and essential attributes generated by a multilayer neural network... These claims are further investigated by applying statistical tests. With the results presented in this article, together with results from related investigations, we want to support practitioners and scholars in answering the following question: which measure should be looked at first if accuracy is the most important criterion, if an application is time-critical, or if a compromise is needed? The paper demonstrates that feature extraction by the proposed novel method can improve the time series forecasting process.
This document provides an overview of time series analysis and cross-sectional analysis. It defines both approaches and discusses their goals, types, components, techniques, and advantages/disadvantages. For time series analysis, it describes trends, seasonality, cycles, and irregular variations as the main components. Common techniques mentioned include Box-Jenkins ARIMA models and Holt-Winters exponential smoothing. Advantages include the ability to study trends over time, while disadvantages relate to issues like missing data, measurement error, and changing patterns. The document then covers cross-sectional analysis and provides a comparison of the two approaches.
A Review on the Comparison of Box Jenkins ARIMA and LSTM of Deep LearningYogeshIJTSRD
A time series is a set of events measured sequentially over time. Predicting a time series is mostly about predicting the future, and the ability of a time series forecasting model is determined by its success in predicting the future; this often comes at the expense of being able to explain why a specific prediction was made. The Box-Jenkins model assumes that the time series is stationary, and thus suggests differencing a non-stationary series once or several times to achieve stationarity; this yields an ARIMA model, the I standing for Integrated. LSTM networks, comparable to computer memory, employ a gated cell for storing information; like the previously mentioned networks, LSTM cells also learn when to read and write information from preceding time steps. Even though the work is new, it is clear that LSTM architectures offer strong prospects as contenders for modeling and forecasting time series. The outcomes of the overall discrepancy in error indicate that, in terms of both RMSE and MAE, the LSTM model tended to have greater predictive accuracy than the ARIMA model. Stavelin Abhinandithe K | Madhu B | Balasubramanian S | Sahana C "A Review on the Comparison of Box-Jenkins ARIMA and LSTM of Deep Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-3, April 2021, URL: https://www.ijtsrd.com/papers/ijtsrd39831.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/applied-mathamatics/39831/a-review-on-the-comparison-of-boxjenkins-arima-and-lstm-of-deep-learning/stavelin-abhinandithe-k
2-DOF BLOCK POLE PLACEMENT CONTROL APPLICATION TO:HAVE-DASH-IIBTT MISSILEZac Darcy
In a multivariable servomechanism design, it is required that the output vector track a certain reference vector while satisfying some desired transient specifications. For this purpose, a 2-DOF control law consisting of a state feedback gain and a feedforward scaling gain is proposed. The control law is designed using the block pole placement technique, by assigning a set of desired block poles in different canonical forms. The resulting control is simulated for a linearized model of the HAVE DASH II BTT missile; the numerical results are analyzed and compared in terms of transient response, gain magnitude, performance robustness, stability robustness, and tracking. The structure best suited to this case study is then selected.
COMPUTATIONAL COMPLEXITY COMPARISON OF MULTI-SENSOR SINGLE TARGET DATA FUSION...ijccmsjournal
Target tracking using observations from multiple sensors can achieve better estimation performance than a single sensor. The most famous estimation tool in target tracking is the Kalman filter, and there are several mathematical approaches to combining the observations of multiple sensors with it. An important issue in choosing an approach is computational complexity. In this paper, four data fusion algorithms based on the Kalman filter are considered, comprising three centralized methods and one decentralized method. Using MATLAB, the computational loads of these methods are compared as the number of sensors increases. The results show that the inverse covariance method has the best computational performance when the number of sensors is above 20; for a smaller number of sensors, other methods, especially the group-sensors method, are more appropriate.
On Selection of Periodic Kernels Parameters in Time Series Prediction cscpconf
This paper describes an analysis of the parameters of periodic kernels. Periodic kernels can be used for the prediction task, performed as a typical regression problem. The prediction of real time series is performed on the basis of the Periodic Kernel Estimator (PerKE). As periodic kernels require their parameters to be set, it is necessary to analyse the parameters' influence on prediction quality. The paper describes an easy methodology, based on grid search, for finding parameter values for periodic kernels. Two different error measures are considered as the prediction quality, but they lead to comparable results. The methodology was tested on benchmark and real datasets and proved to give satisfactory results.
This document proposes a new similarity measure for comparing spatial MDX queries in a spatial data warehouse to support spatial personalization approaches. The proposed similarity measure takes into account the topology, direction, and distance between the spatial objects referenced in the MDX queries. It defines the topological distance between spatial scenes referenced in queries based on a conceptual neighborhood graph. It also defines the directional distance between queries based on a graph of spatial directions and transformation costs. The similarity measure will be included in a recommendation approach the authors are developing to recommend relevant anticipated queries to users based on their previous queries.
A Combination of Wavelet Artificial Neural Networks Integrated with Bootstrap...IJERA Editor
In this paper, an iterative forecasting methodology for time series prediction that integrates wavelet de-noising and decomposition with an Artificial Neural Network (ANN) and Bootstrap methods is put forward. A given time series to be forecasted is initially decomposed into trend and noise (wavelet) components by a wavelet de-noising algorithm. Both trend and noise components are then further decomposed by a wavelet decomposition method, producing orthonormal Wavelet Components (WCs) for each one. Each WC is separately modelled through an ANN in order to provide both in-sample and out-of-sample forecasts. At each time t, the respective forecasts of the WCs of the trend and noise components are simply added to produce the in-sample and out-of-sample forecasts of the underlying time series. Finally, out-of-sample predictive densities are empirically simulated by the Bootstrap sampler, and confidence intervals are then obtained for a given level of credibility. The proposed methodology, when applied to the well-known Canadian lynx data, which exhibit non-linearity and non-Gaussian properties, outperformed other methods traditionally used to forecast them.
This document provides an overview of techniques for temporal data mining. It discusses how temporal sequences arise in domains like engineering, science, finance, and healthcare. It covers approaches for representing temporal sequences, including keeping data in its original form, piecewise approximations, transformations to other domains, discretization, and generative models. It also discusses measuring similarity between sequences and techniques for classification and relation finding in temporal data, including frequent pattern detection and prediction. The goal is to classify and organize temporal data mining techniques to help practitioners address problems involving temporal data.
450595389ITLS5200INDIVIDUALASSIGNMENTS22015Leonard Ong
- The document discusses applying time series analysis to forecast container throughput at Sydney Ports terminals.
- It examines BITRE data on total container throughput from 2006-2014, finding seasonality. Nine forecasting methods are applied including simple and weighted moving averages, exponential smoothing, and methods accounting for trends and seasonality.
- The Holt-Winters method produced the most accurate forecast for September 2014 throughput, based on having the lowest errors according to MAD, MSE, and MAPE calculations.
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONSijscmcj
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) computations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n^3), makes the reduction of a large indexing structure a difficult task. The article compares the time complexity and accuracy of the algorithms implemented in two different environments: the first is associated with the CPU and MATLAB R2011a, the second with graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to obtain a resulting matrix of large size. For both environments, computations were performed in double and single precision.
International Journal of Computer Applications (0975 – 8887), Volume 149 – No. 5, September 2016
FESM: An Analytical Framework for Elastic Similarity Measures Based Time Series Pattern Recognition

Nafas Esmaeili, Young Researchers and Elite Club, Qazvin Branch, Islamic Azad University, Qazvin, Iran
Karim Faez, Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
ABSTRACT
Comparing time series is a key step in most time series tasks, and there are various similarity measures for quantifying how similar two time series are. Similarity measures are the basis of time series research, and they are quite important for improving the efficiency and accuracy of time series pattern recognition tasks; selecting the best similarity measure is therefore essential. To address this issue, this paper proposes an analytical framework for elastic similarity measures based time series pattern recognition, FESM for short. FESM consists of three main components: 1) classification of elastic similarity measures of time series, 2) comparative evaluation of the classified similarity measures based on proposed qualitative evaluation criteria, and 3) application scopes of the classified similarity measures. FESM supports the quick understanding and comparison of time series similarity measures and the selection of the best existing similarity measure for a given time series pattern recognition task.
General Terms
Pattern Recognition
Keywords
Time Series, Pattern Recognition, Elastic Similarity Measures.
1. INTRODUCTION
Time series pattern recognition has a broad range of real-world applications in astronomy, the stock market, phenology, video mining, energy and power, and many other scopes. These scopes generate huge amounts of data, and the effect of the similarity measure used for pattern recognition in correlated data such as time series is an important factor that should be considered.

In astronomy [1], time series are produced from millions of sky objects in order to classify stars and other phenomena into one of the known types of variability; given the huge discovery space that remains to be explored for a vast number of new, unknown discoveries in the astronomy domain, similarity measures between different astronomical time series are used for learning and pattern recognition tasks. In the stock market [2], there is the challenge of continuously growing stock-market data with unexpected extrema over time; the similarity measure between stock-market time series can be used as input for hierarchical clustering algorithms in order to handle this growth. In plant phenology studies [3], selecting the best similarity measure for classifying plant species, and providing measures to estimate the change in phenological events such as the loss and discoloration of leaves, is an important objective. Video mining [4] is an active research area related to many application domains, from semantic indexing and retrieval to intelligent video surveillance; various approaches to detecting video events have been proposed in the literature, and the objective of video mining is to discover and describe interesting patterns in the huge amounts of video data. There are structural patterns in recurrence plots which can be used to determine the similarity between two video sequences, which is necessary for classification; measuring the similarity between two recurrence plots is therefore needed for video classification and for event detection in video sequences. In [5], the CK-1 distance was used to measure the similarity between unthresholded recurrence plots generated from time series, and the results show that the combination of the CK-1 distance measure with unthresholded recurrence plots yields higher classification accuracy for time series which represent shape. In energy and power [6], checking the effect of similarity measures is a necessary step for the optimized design and development of efficient clustering-based models, predictors, and controllers of time-dependent processes such as building energy consumption patterns. Figure 1 shows similarity measures based time series pattern recognition.
All of the application scopes mentioned above show that one of the main challenges of time series pattern recognition is selecting a fitting measure of similarity or distance between time series [7]. Selecting the best similarity measure in order to improve the efficiency and accuracy of time series pattern recognition tasks is therefore key and essential.
Lock-step similarity measures are simple and very intuitive for time series data, but they have a known weakness: sensitivity to distortion in the time axis. These measures do not match any of the types of robustness [8], so many elastic measures have been proposed to overcome this weakness; they can generally handle the problems of lock-step similarity measures [1].
Elastic similarity measures have elastic steps in their matching and can work with time series of different lengths; because of this flexibility, they are widely used in science, medicine, industry, finance, and many other areas.

Fig 1: Similarity measures based time series pattern recognition
On the issue of elastic similarity measures, this paper proposes an analytical framework for elastic similarity measures based time series pattern recognition, FESM for short. The proposed framework consists of three main components: 1) classification of elastic similarity measures of time series, 2) comparative evaluation of the classified similarity measures based on proposed qualitative evaluation criteria, and 3) application scopes of the classified similarity measures. FESM supports the quick and simple understanding of time series similarity measures and the selection of the one that best fits the need.
The rest of the paper is outlined as follows: Section 2
describes the related concepts, Section 3 presents our
proposed framework and Section 4 is devoted to the
conclusion and future directions.
2. RELATED CONCEPTS
In this section, some of the important related concepts are covered, beginning with the following key definitions:
Definition 1 Time Series. A time series T = (t1, . . . , tn), ti∈ R
is an ordered sequence of n real-valued variables [8], it is a set
of measurements arranged in order either by time or spatial
location [9].
Definition 2 Subsequence. Given a time series T of length m,
a subsequence S of T is a sampling of length l ≤ m of
contiguous positions from T, that is, S = tp, . . . , tp+l−1, for 1 ≤
p ≤ m − l + 1 [10].
Definition 3 Similarity Measure. The similarity measure Dist(Ta, Tb) gives the distance between two time series Ta and Tb; Dist is a function that takes two time series as inputs and returns the distance d between them. The distance between time series needs to be carefully defined in order to reflect the underlying similarity of such data; this is particularly desirable for segmentation, classification, clustering, similarity-based retrieval, and other time series mining procedures [11].
Definition 4 Subsequence similarity measure. The subsequence similarity measure Dsubseq(T, S) is defined as Dsubseq(T, S) = min(Dist(T, S′)) over all S′ ∈ S_S^|T|, the set of subsequences of S with the same length as T. It represents the distance between T and its best matching location in S [8].
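As a small illustration of Definition 4, the following is a minimal Python sketch that slides T over S and returns the distance to the best matching location, assuming the Euclidean distance plays the role of Dist; the helper names are illustrative, not taken from the paper.

```python
import math

def euclidean(a, b):
    """Lock-step Euclidean (L2) distance between equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def subsequence_distance(t, s):
    """Dsubseq(T, S): the distance between T and its best matching location in S."""
    w = len(t)  # every candidate subsequence of S has the same length as T
    return min(euclidean(t, s[p:p + w]) for p in range(len(s) - w + 1))
```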
Definition 5 Metric distance. The distance between two time series Ta and Tb is a metric if Dist(Ta, Tb) ≥ 0 and Dist(Ta, Tb) = Dist(Tb, Ta), and the triangle inequality Dist(Ta, Tc) ≤ Dist(Ta, Tb) + Dist(Tb, Tc) holds [1]. More formally, a metric is a function that behaves according to a specific set of conditions (non-negativity, identity of indiscernibles, symmetry, triangle inequality) [11].
Definition 6 Elastic similarity measures. Elastic similarity measures are similarity measures that adapt to the data under conditions such as offset translation and amplitude scaling.
Definition 7 Outliers and Noise. These two concepts are different from each other. Noise is random error or variance, which should be removed. Outliers violate the mechanism that generates the normal data and should be detected; in order to detect outliers, the noise should first be removed.
Definition 8 Global constraints. The elastic similarity measures are based on dynamic programming and have quadratic computational complexity. The Sakoe-Chiba band and the Itakura parallelogram are two global constraints used to significantly speed up the calculation of similarities and to improve the accuracy of classification [20]; the Sakoe-Chiba band, for example, limits the warping path to cells with |i − j| ≤ r for a band radius r.
3. FESM: PROPOSED FRAMEWORK
In this section, an analytical framework for elastic similarity measures based pattern recognition, FESM for short, is proposed. FESM consists of three main components: 1) classification of elastic similarity measures of time series, 2) comparative evaluation of the classified similarity measures based on proposed qualitative evaluation criteria, and 3) application scopes of the classified similarity measures. The details of each component are described below.
3.1 The First Component of FESM:
Classification
The first component of FESM is a classification of the elastic similarity measures of time series. Based on this classification, there are two main approaches: 1) elastic similarity measures based on the Lp norms, and 2) elastic similarity measures based on the matching threshold. These two main approaches contain different types of measures, which are shown in Figure 2. Let Ta and Tb be two time series with m and n data points respectively; given Ta and Tb, the details of this classification are described in the following.
Fig 2: Classification of elastic similarity measures of time series
3.1.1 Elastic Similarity Measures Based on the Lp Norms
The first approach comprises elastic similarity measures built on the Lp norms, such as the Manhattan distance (L1) [12] and the Euclidean distance (L2) [13]. This approach represents similarity measures which satisfy the triangle inequality and are metric distances. Two main types of similarity measures are considered in this approach: 1) Dynamic Time Warping (DTW) [14] and 2) Edit Distance with Real Penalty (ERP) [22]. An illustration of an elastic similarity measure based on the Lp norms is shown in Figure 3.

Fig 3: An illustration of an elastic similarity measure based on the Lp norms
• Dynamic Time Warping (DTW)
DTW searches for the best alignment between two time series by attempting to minimize the distance between them; it is a technique for effectively achieving this warping [17]. Given the two series Ta and Tb with m and n data points respectively, it computes their similarity by finding the optimal warping path in the matrix of distances between the points of the two series. With a global constraint of width zero, DTW reduces to the lock-step L2 distance. DTW is defined by equation (1) as follows [14]:
DistDTW(Ta, Tb) = DTW(m, n)        (1)

where

DTW(i, j) = 0,    if i = 0 and j = 0
            ∞,    if i = 0 and j > 0
            ∞,    if j = 0 and i > 0
            dist(Ta[i], Tb[j]) + min{DTW(i − 1, j − 1), DTW(i − 1, j), DTW(i, j − 1)},    otherwise
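For illustration, the following is a minimal Python sketch of the recurrence in equation (1); it assumes the absolute difference as the point distance dist, and the optional `window` parameter (an illustrative name) realizes the Sakoe-Chiba band of Definition 8.

```python
import math

def dtw(ta, tb, window=None):
    """DTW(m, n) from equation (1); window is an optional Sakoe-Chiba band radius."""
    m, n = len(ta), len(tb)
    # Boundary conditions: DTW(0, 0) = 0 and DTW(i, 0) = DTW(0, j) = infinity.
    d = [[math.inf] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # Global constraint: cells with |i - j| > window stay at infinity.
            if window is not None and abs(i - j) > window:
                continue
            cost = abs(ta[i - 1] - tb[j - 1])       # dist(Ta[i], Tb[j])
            d[i][j] = cost + min(d[i - 1][j - 1],   # match both points
                                 d[i - 1][j],       # repeat Tb[j]
                                 d[i][j - 1])       # repeat Ta[i]
    return d[m][n]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: a perfectly warped alignment
```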
• Edit Distance with Real Penalty (ERP)
ERP introduces a constant value g as the gap element of the edit distance and uses the L1 distance between elements as the penalty to handle local time shifting [22]. ERP has its roots in the classic string edit distance. If the distance between two points is too large, ERP simply uses the distance between one of those points and the reference gap value g [7]. ERP is defined by equation (2) as follows:
ERP(Ta, Tb) = Σ_{i=1}^{n} dist(Tb[i], g),    if m = 0
              Σ_{i=1}^{m} dist(Ta[i], g),    if n = 0
              min{ERP(Rest(Ta), Rest(Tb)) + dist(Ta[1], Tb[1]),
                  ERP(Rest(Ta), Tb) + dist(Ta[1], g),
                  ERP(Ta, Rest(Tb)) + dist(Tb[1], g)},    otherwise        (2)
where dist(a, b) is the distance between two elements and g is the gap value of the edit distance. The real distance between elements, dist(Ta[1], Tb[1]), is used as the penalty to handle local time shifting.
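As a companion to equation (2), the sketch below gives an equivalent iterative dynamic programming form of ERP in Python, with the recursive Rest() formulation unrolled into a matrix and the absolute difference assumed for dist:

```python
def erp(ta, tb, g=0.0):
    """ERP from equation (2) with gap value g, in iterative form."""
    m, n = len(ta), len(tb)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    # Base cases: aligning a series against an empty one costs the sum of
    # gap penalties dist(., g) over its points (the two sums in equation (2)).
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + abs(ta[i - 1] - g)
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + abs(tb[j - 1] - g)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + abs(ta[i - 1] - tb[j - 1]),  # match both points
                d[i - 1][j] + abs(ta[i - 1] - g),              # gap in Tb
                d[i][j - 1] + abs(tb[j - 1] - g),              # gap in Ta
            )
    return d[m][n]
```

Because every gap is penalized against the fixed value g rather than against a neighbouring point, ERP preserves the metric property of the underlying L1 distance.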
3.1.2 Elastic Similarity Measures Based on the Matching Threshold
The second approach includes elastic similarity measures based on the matching threshold. This approach represents similarity measures which do not satisfy the triangle inequality and are not metric distances. Two main types of similarity measures are considered in this approach: 1) Longest Common SubSequence (LCSS) [19] and 2) Edit Distance on Real sequence (EDR) [23]. An illustration of an elastic similarity measure based on the matching threshold is shown in Figure 4.
Fig 4: An illustration of an elastic similarity measure based on the matching threshold
• Longest Common SubSequence (LCSS)
In LCSS, the similarity between two time series is expressed as the length of the longest common subsequence of the two series [19]. To adapt the concept of matching characters to the time series setting, a threshold parameter ε was introduced, stating that two points from two time series are considered to match if their distance is less than ε [18]. LCSS uses the threshold parameter ε for point matching together with a warping threshold δ. It is defined by equation (3) as follows [19, 20]:
DistLCSS(Ta, Tb) = 1 − LCSS(m, n) / min(m, n)        (3)

where

LCSS(i, j) = 0,    if i = 0 or j = 0
             1 + LCSS(i − 1, j − 1),    if |Ta[i] − Tb[j]| ≤ ε and |i − j| ≤ δ
             max{LCSS(i − 1, j), LCSS(i, j − 1)},    otherwise
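A minimal Python sketch of equation (3) follows, assuming the matching threshold ε and an optional warping threshold δ (pass delta=None to disable the warping constraint); the parameter names are illustrative.

```python
def lcss_distance(ta, tb, eps, delta=None):
    """DistLCSS from equation (3): 1 - LCSS(m, n) / min(m, n)."""
    m, n = len(ta), len(tb)
    l = [[0] * (n + 1) for _ in range(m + 1)]  # LCSS(i, 0) = LCSS(0, j) = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            within_delta = delta is None or abs(i - j) <= delta
            if within_delta and abs(ta[i - 1] - tb[j - 1]) <= eps:
                l[i][j] = l[i - 1][j - 1] + 1            # points match
            else:
                l[i][j] = max(l[i - 1][j], l[i][j - 1])  # leave a point unmatched
    return 1.0 - l[m][n] / min(m, n)
```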
• Edit Distance on Real sequence (EDR)
EDR is another elastic similarity measure based on the matching threshold. It uses a threshold parameter and assigns penalties to the gaps between two similar subsequences of time series according to the lengths of the gaps. In EDR, the matching threshold reduces the effect of noise by quantizing the distance between a pair of elements to the two values 0 and 1, and it also reduces the effect of outliers. EDR is defined by equation (4) as follows [23]:
EDR(Ta, Tb) = n,    if m = 0
              m,    if n = 0
              min{EDR(Rest(Ta), Rest(Tb)) + subcost,
                  EDR(Rest(Ta), Tb) + 1,
                  EDR(Ta, Rest(Tb)) + 1},    otherwise        (4)

where subcost = 0 if match(Ta[1], Tb[1]) = true, that is |Ta[1] − Tb[1]| ≤ ε, and subcost = 1 otherwise; Rest(T) stands for the time series obtained from T by eliminating its first element.
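The following Python sketch unrolls the recursive definition in equation (4) into the equivalent dynamic programming matrix, assuming match(a, b) is true exactly when |a − b| ≤ ε:

```python
def edr(ta, tb, eps):
    """EDR from equation (4) with matching threshold eps."""
    m, n = len(ta), len(tb)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting the remaining points of Ta costs 1 each
    for j in range(n + 1):
        d[0][j] = j  # deleting the remaining points of Tb costs 1 each
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # The distance between a pair of elements is quantized to 0 or 1.
            subcost = 0 if abs(ta[i - 1] - tb[j - 1]) <= eps else 1
            d[i][j] = min(d[i - 1][j - 1] + subcost,  # match / substitute
                          d[i - 1][j] + 1,            # gap in Tb
                          d[i][j - 1] + 1)            # gap in Ta
    return d[m][n]
```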
3.2 The Second Component of FESM: Comparative Evaluation of Classified Elastic Similarity Measures Based on Proposed Qualitative Evaluation Criteria
This section describes the second component of FESM: the comparative evaluation of the classified similarity measures based on qualitative evaluation criteria. First, the qualitative evaluation criteria and their rankings are proposed; then, the details of the comparative evaluation of the classified similarity measures against these criteria are described. The results are shown in Table 1.
3.2.1 Proposed Qualitative Evaluation Criteria
• Metric: If the intended similarity measure satisfies the triangle inequality, it is a metric; otherwise it is nonmetric. Ranking: Yes or No.
• Noise handling: Refers to the robustness of the intended similarity measure in handling noise. Ranking: Yes or No.
• Outliers handling: Refers to the robustness of the intended similarity measure in handling outliers. Ranking: Yes or No.
• Performance: Refers to comparing the average classifier error rates obtained with the different elastic similarity measures of our proposed framework. It is based on counts of statistically significant wins and losses for the 1NN classifier with different similarity measures, across many different data sets, using the Wilcoxon signed-rank test [20]. Ranking: High, Average, or Low.
• Global constraints effects: Refers to the effect of global constraints on the different elastic similarity measures. Some elastic similarity measures are highly sensitive to the global constraints and show a maximum effect, while others differ in their sensitivity and effects. Ranking: Maximum, Medium, or Minimum.
• Data points mapping: Refers to the feasible mappings of data points that the measure allows. Ranking: one-to-one, one-to-many, and one-to-none.
3.2.2 Comparative Evaluation of Classified Elastic Similarity Measures Based on Proposed Qualitative Evaluation Criteria
• Dynamic Time Warping
DTW is an elastic metric similarity measure which can give satisfactory results on input time series of different, short lengths. In DTW, all elements from both sequences must be used, even the outliers [17], so DTW is sensitive to noise and outliers and can handle neither. Experimental results in [20] show that DTW is the most sensitive of the measures to the Sakoe-Chiba band global constraint with respect to the 1NN graph, and that it has the best performance of all the elastic similarity measures. This distance measure allows one-to-one and one-to-many comparison of points.
• Edit Distance with Real Penalty
ERP is an elastic metric similarity measure. It is sensitive to noise and cannot handle it, but it is robust to outliers. ERP shows intermediate behaviour with respect to the Sakoe-Chiba band global constraint regarding the 1NN graph, and its performance is generally the worst [20]. This distance measure allows one-to-one, one-to-many, and one-to-none comparison of points.
• Longest Common SubSequence
LCSS is an elastic nonmetric similarity measure with subsequence matching. It solves the problem of the presence of noise by taking into account only sufficiently similar points [19], and elements such as outliers may be left unmatched or left out [17], so LCSS can handle both noise and outliers. Experimental results in [20] show that LCSS behaves in an intermediate way under the application of the Sakoe-Chiba band, and its performance is average. LCSS can compare points one-to-one, one-to-many, and one-to-none.
• Edit Distance on Real sequence
EDR is an elastic nonmetric similarity measure; as with LCSS, both noise and outliers can be handled through the threshold setting. The application of the Sakoe-Chiba band exerts the lowest influence on EDR, and it has an average error rate across the different data sets, hence average performance [20]. EDR can compare points one-to-one, one-to-many, and one-to-none.
Table 1. Comparative evaluation of classified similarity measures based on evaluation criteria

Similarity measures based on the Lp norms:
• Dynamic Time Warping (DTW). Metric distance: Yes; Noise handling: No; Outliers handling: No; Global constraints effects: Maximum; Performance: High; Data points mapping: one-to-one, one-to-many.
• Edit Distance with Real Penalty (ERP). Metric distance: Yes; Noise handling: No; Outliers handling: Yes; Global constraints effects: Medium; Performance: Low; Data points mapping: one-to-one, one-to-many, one-to-none.

Similarity measures based on the matching threshold:
• Longest Common SubSequence (LCSS). Metric distance: No; Noise handling: Yes; Outliers handling: Yes; Global constraints effects: Medium; Performance: Average; Data points mapping: one-to-one, one-to-many, one-to-none.
• Edit Distance on Real sequence (EDR). Metric distance: No; Noise handling: Yes; Outliers handling: Yes; Global constraints effects: Minimum; Performance: Average; Data points mapping: one-to-one, one-to-many, one-to-none.
3.3 The Third Component of FESM: Application Scopes of the Classified Similarity Measures
This section describes the third component of FESM. There are several time series applications in diverse domains which are increasingly mined with different similarity measures. Here, the proper similarity measures applied in some of these applications, drawn from different domains, are shown. A good choice of similarity measure strongly affects the results in these applications; the details of the analysis are as follows.
3.3.1 Application Scope of the Elastic Similarity Measures Based on the Lp Norms: Trajectory Analysis
The similarity measures in the first approach of the proposed classification have the metric property; as mentioned in Section 3.2, a metric distance satisfies the triangle inequality, and the triangle inequality is an efficient basis for pruning strategies [22]. Trajectory data have numerous potential applications in traffic control, flight data analysis, urban planning, astronomy, transportation systems, and animal science. For large trajectory databases, it is important to minimize the computation of distances between trajectories in the database [27].
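To make the role of the triangle inequality concrete, the sketch below shows one standard pruning pattern for 1-NN trajectory search under a metric such as ERP; the reference-trajectory scheme and all names are illustrative, not a specific method from [27].

```python
def nn_search(query, database, dist, ref, dist_to_ref):
    """1-NN search that skips candidates via a triangle-inequality lower bound.

    dist must be a metric (e.g., ERP); dist_to_ref[i] holds the precomputed
    distance from database[i] to a fixed reference trajectory ref.
    """
    d_query_ref = dist(query, ref)
    best_idx, best_d = None, float("inf")
    for idx, traj in enumerate(database):
        # Triangle inequality: |d(q, ref) - d(t, ref)| <= d(q, t), so if this
        # lower bound already exceeds the current best, skip the expensive call.
        if abs(d_query_ref - dist_to_ref[idx]) >= best_d:
            continue
        d = dist(query, traj)
        if d < best_d:
            best_idx, best_d = idx, d
    return best_idx, best_d
```

Because LCSS and EDR are not metrics, this bound does not apply to them; this is one practical advantage of the Lp-norm-based measures for large trajectory databases.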
Application 1: DTW in flight systems
DTW is one of the most commonly used distance measures for trajectory data. The basic idea behind the DTW distance is to find the warping path between two trajectories that minimizes the warping cost [26].

An automated technique for clustering flight and trajectory data using a Particle Swarm Optimization (PSO) approach was proposed in [15]; it considered the DTW distance one of the most commonly used metric distance measures for flight trajectory data. The proposed technique is able to find a (near) optimal number of clusters as well as (near) optimal cluster centers during the clustering process, reducing the dimensionality of the search space and improving performance.
Application 2: ERP in transportation systems
Early detection of outliers is critical for monitoring and managing the condition of railway transportation systems. Railway transportation requires railway systems to be fault-tolerant and safe, and failures of railway point systems often lead to service delays or hazardous situations. Condition monitoring systems are used to detect early signs of deteriorated railway point systems, to investigate anomalies, and to prevent failures.

A methodology for early warning of possible point failure through early detection of changes in the current drawn by the point motor was proposed in [28]; it uses the one-class support vector machine classification method with the ERP similarity measure, taking into account specific features of the data in the railway transportation field. It is able to detect changes in the measurements of the current of the point operating equipment with greater accuracy than the commonly used threshold-based technique.
3.3.2 Application Scope of the Elastic Similarity Measures Based on the Matching Threshold: Image and Video Analysis
Image and video analysis includes closely related broad fields such as image pattern recognition, image and video retrieval and authentication, image and video classification, video surveillance, video event recognition, video tracking, computer vision, and machine vision.

The similarity measures in the second approach of the proposed classification have the threshold-setting characteristic, which handles noise and outliers properly. One major challenge in these fields is the detection of abnormal events together with noise handling, so the similarity measures of the second approach of our framework's classification can be used for this purpose.
Application 1: LCSS in video surveillance
Detection of abnormal events in video surveillance systems is based on the analysis of the trajectories of moving objects in a controlled scene. The comparative experimental results in [25] demonstrate that, of the four distances widely used as trajectory similarity measures, LCSS is the most accurate and efficient for the clustering task, even in the case of different sampling rates and noise. In their generic process for abnormal event detection, normal/abnormal clusters are first extracted from saved trajectories through an unsupervised clustering algorithm; each newly detected trajectory is then considered and classified as either normal or abnormal.

The similarity between trajectories is a critical step in trajectory analysis, since it affects the quality of further applications such as clustering and classification.
Application 2: EDR in computer vision
There is a need for simulation of computer vision systems applied to crowd monitoring; simulation of the most important aspects of crowds for the performance analysis of computer-based video surveillance systems is carried out in [29].

Optimized crowd simulation algorithms can be utilized in computer vision research, for example to provide predictions of pedestrian locations in multi-person tracking tasks. In [24], crowd simulation algorithms are refined by optimizing their parameters based on EDR, and the results demonstrate that this approach significantly reduces the distance between the simulated trajectories of individuals and the trajectories extracted from real video.
4. CONCLUSION AND FUTURE DIRECTIONS
In this paper, an analytical framework for elastic similarity measures based time series pattern recognition, FESM for short, was developed. FESM consists of three main components: 1) classification of elastic similarity measures of time series, 2) comparative evaluation of the classified similarity measures based on proposed qualitative evaluation criteria, and 3) application scopes of the classified similarity measures. Two main approaches were presented in the first component: 1) elastic similarity measures based on the Lp norms, and 2) elastic similarity measures based on the matching threshold; their different types were described in detail. The second component of FESM carried out the comparative evaluation of the classified similarity measures based on the proposed qualitative evaluation criteria, and finally, in the third component, the application scopes of the classified similarity measures were demonstrated. The outcomes of this research will help researchers quickly understand and compare time series similarity measures, and select the existing similarity measure that best fits their needs.
Many researchers have proposed hybrid methods which use multiple similarity measures; such hybrid methods have improved the performance of existing similarity measures in various application scopes. In future work, a framework for the novel hybrid similarity measures proposed in recent years will be developed.
5. REFERENCES
[1] Lin, J., Williamson, S., Borne, K., DeBarr, D. 2012.
Pattern recognition in time series. Advances in Machine
Learning and Data Mining for Astronomy. Eds. Kamal,
A., Srivastava, A., Way, M., and Scargle, J. Chapman &
Hall. To Appear.
[2] Cerioli, A., Laurini, F., and Corbellini, A. 2005. Functional cluster analysis of financial time series. In New Developments in Classification and Data Analysis, eds. Vichi, M., Monari, P., Mignani, S., Montanari, A. Springer-Verlag, Berlin, pages 333-341.
[3] Conti, J.C., Farial, F.A., Almeida, J., Alberton, B.,
Morellato, P. C. L., Camolesi, L., Torres, R. 2014.
Evaluation of Time Series Distance Functions in the
Task of Detecting Remote Phenology Patterns. ICPR
2014: 3126-3131
[4] Vijayakumar, V., Nedunchezhian, R. 2012. A study on
video data mining. IJMIR 1(3): 153-172
[5] Silva, D. F., De Souza, V. M., Batista, G. E. 2013. Time series classification using compression distance of recurrence plots. In Data Mining (ICDM), 2013 IEEE 13th International Conference on, pages 687–696. IEEE.
[6] Iglesias, F., Kastner, W. 2013. Analysis of Similarity
Measures in Times Series Clustering for the Discovery of
Building Energy Patterns, Energies, vol. 6, no. 2, pp.
579–597.
[7] Wang, X., Mueen, A., Ding, H., Trajcevski, G.,
Scheuermann, P., Keogh, E. 2012. Experimental
comparison of representation methods and distance
measures for time series data. Data Min. Knowl. Disc.
[8] Esling, P., Agón, C. 2012. Time-series data mining.
ACM Comput. Surv. 45(1):12.
[9] Shams, MB., Haji, S., Salman, A., Abdali, H., Alsaffar, A. 2016. Time series analysis of Bahrain's first hybrid renewable energy system. Energy 103, 1-15.
[10] Ye, L., Keogh, E. 2011. Time series shapelets: a novel technique that allows accurate, interpretable and fast classification. Data Min. Knowl. Disc., Volume 22, Issue 1, pp. 149–182.
[11] Spiegel, S. 2015. Time series distance measures.
Segmentation, classification, and clustering of temporal
data. Doctoral Thesis.
http://dx.doi.org/10.14279/depositonce-4619.
[12] Yi, B.-K., Faloutsos, C. 2000. Fast time sequence indexing for arbitrary Lp norms. International Conference on Very Large Data Bases, pp. 385–394.
[13] Faloutsos, C., Ranganathan, M., Manolopoulos, Y. 1994. Fast subsequence matching in time-series databases. In Proceedings of the ACM SIGMOD Int'l Conference on Management of Data, pp. 419-429.
[14] Berndt, D. J., Clifford, J. 1994. Using Dynamic Time
Warping to Find Patterns in Time Series, in KDD
Workshop, 1994, pp. 359–370.
[15] Izakian, Z., Mesgari, MS., Abraham, A. 2016. Clustering of trajectory data using a particle swarm optimization. Computers, Environment and Urban Systems, Elsevier, Volume 55, pp. 55–65.
[16] Keogh, E. 2002. Exact Indexing of Dynamic Time
Warping. In Proceedings of the 28th international
Conference on Very Large Data Bases. Hong Kong,
China.
[17] Ratanamahatana, CA., Lin, J., Gunopulos, D., Keogh, E., Vlachos, M., Das, G. 2005. Mining time series data. Data Mining and Knowledge Discovery Handbook, Maimon, O., Rokach, L. (eds.). Springer: Berlin. ISBN: 978-0-387-24435-8.
[18] Ding, H., Trajcevski, G., Scheuermann, P., Wang, X., &
Keogh, E. 2008. Querying and mining of time series
data: Experimental comparison of representations and
distance measures. Proceeding of the VLDB
Endowment, 1(2), 1542–1552.
[19] Vlachos, M., Kollios, G., Gunopulos, D. 2002.
Discovering similar multidimensional trajectories. In:
Proceedings 18th International Conference on Data
Engineering. IEEE Comput. Soc, pp. 673–684.
[20] Kurbalija, V., Radovanović, M., Geler, Z., Ivanović, M. 2014. The influence of global constraints on similarity measures for time-series databases. Knowledge-Based Systems 56, 49–67.
[21] Vlachos, M., Hadjieleftheriou, M., Gunopulos, D.,
Keogh, E.J. 2006. Indexing Multidimensional Time-
Series. VLDB J., 15(1).
[22] Chen, L., Ng, R. 2004. On the marriage of Lp-norms and edit distance. In Proceedings of the 30th International Conference on Very Large Data Bases (VLDB), pp. 792–803.
[23] Chen, L., Ozsu, M.T., Oria, V. 2005. Robust and fast
similarity search for moving object trajectories. In:
Proceedings of the 2005 ACM SIGMOD International
Conference on Management of Data. ACM, New York,
NY, USA, pp. 491–502.
[24] Jin, Z., Bhanu, B. 2013. Optimizing crowd simulation
based on real video data, in: IEEE International
Conference on Image Processing (ICIP), Melbourne,
VIC, Australia, pp. 3186–3190.
doi:10.1109/ICIP.2013.6738656.
[25] Bouarada Ghrab, N., Fendri, E., Hammami, M. 2016. Clustering Based Abnormal Event Detection: Experimental Comparison for Similarity Measures' Efficiency. ICIAR, pp. 367-374.
[26] Zhang, Z., Huang, K., Tan, T. 2006. Comparison of similarity measures for trajectory clustering in outdoor surveillance scenes. Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), pp. 1135-1138.
[27] Abdelbar, M., Buehrer, R.M. 2016. Improving Cellular Positioning Indoors Through Trajectory Matching. In IEEE/ION Position, Location and Navigation Symposium. DOI: 10.1109/PLANS.2016.7479705.
[28] Vileiniskis, M., Remenyte-Prescott, R., Rama, D. 2015.
A fault detection method for railway point systems. Proc.
Inst. Mech. Eng. F J. Rail Rapid Transit.
[29] Andrade, E.L., Fisher, R.B. 2005. Simulation of crowd problems for computer vision. In Proceedings First International Workshop on Crowd Simulation, vol. 3, pp. 7.