This document summarizes a research paper that uses Singular Spectrum Analysis (SSA) to forecast electricity consumption in the Middle Province of Gaza Strip. SSA is a nonparametric time series analysis technique that decomposes a time series into independent components like trends, oscillations, and noise. The paper applies SSA to monthly electricity consumption data from the region. It finds that SSA outperforms exponential smoothing and ARIMA models in forecast accuracy according to error measures. The paper provides an overview of the mathematical methodology behind SSA and its application to electricity consumption forecasting.
A Singular Spectrum Analysis Technique to Electricity Consumption Forecasting (IJERA Editor)
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA can decompose a time series into independent components such as trends, oscillatory behavior, and noise. This paper applies the SSA approach to the monthly electricity consumption of the Middle Province in Gaza Strip, Palestine. The forecasting results are compared with those of exponential smoothing state space (ETS) and ARIMA models. All three techniques perform similarly well, but SSA outperforms ETS and ARIMA according to forecast error accuracy measures.
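To make the decomposition step concrete, here is a minimal SSA sketch in Python. The window length, the eigentriple grouping, and the toy series are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ssa_decompose(series, window, groups):
    """Basic SSA: embed, SVD, group eigentriples, diagonal-average back."""
    x = np.asarray(series, dtype=float)
    L, n = window, len(x)
    K = n - L + 1
    # Step 1: embedding -- build the L x K trajectory (Hankel) matrix
    X = np.column_stack([x[i:i + L] for i in range(K)])
    # Step 2: singular value decomposition of the trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    out = []
    for idx in groups:
        # Step 3: grouping -- sum the chosen rank-one matrices
        Xg = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in idx)
        # Step 4: diagonal averaging (Hankelization) back to a series
        out.append(np.array([Xg[::-1].diagonal(k).mean()
                             for k in range(-L + 1, K)]))
    return out

# Toy monthly-style series: linear trend + 12-period cycle + noise
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 120)
# Grouping the four leading eigentriples into two pairs; in practice the
# pairs are identified by inspecting singular values and w-correlations.
comp_a, comp_b = ssa_decompose(y, window=24, groups=[[0, 1], [2, 3]])
```

Summing the grouped components recovers the trend-plus-cycle signal; the remaining eigentriples carry mostly noise.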
SSA-based hybrid forecasting models and applications (journalBEEI)
This study combines SSA (Singular Spectrum Analysis) with other methods to improve forecasting performance for time series with complex patterns. It discusses two modifications of TLSAR (Two-Level Seasonal Autoregressive) modeling that incorporate SSA decomposition results, namely TLSNN (Two-Level Seasonal Neural Network) and TLCSNN (Two-Level Complex Seasonal Neural Network). TLSAR consists of a linear trend, a harmonic, and an autoregressive component; the two proposed hybrid approaches instead combine a flexible trend function, harmonics, and neural networks. The trend and harmonic functions are treated as the deterministic part, identified from the SSA decomposition, while the NN handles the nonlinear relationship in the stochastic part. These two SSA-based hybrid models are intended to be more flexible than TLSAR and more applicable to series with intricate patterns. Experiments on monthly accidental deaths in the USA and the daily electricity load of Jawa-Bali show that the proposed SSA-based hybrid models reduce test-set RMSE by up to 95% relative to the TLSAR model.
MFBLP Method Forecast for Regional Load Demand System (CSCJournals)
Load forecasting plays an important role in the planning and operation of a power system; accurate forecasts are necessary for economically efficient operation and effective control. This paper describes a modified forward-backward linear predictor (MFBLP) method for forecasting the regional load demand of New South Wales (NSW), Australia. The method is designed and simulated on actual NSW load data, and its accuracy is reported and compared with previous methods.
This document summarizes a new technique for demand forecasting called exponentially smoothed regression analysis. The technique uses regression models to separately estimate trend and multiplicative seasonality in time series data. It draws on the power of regression analysis while allowing estimates to be smoothed over time like exponential smoothing methods. The technique estimates seasonal factors, deseasonalizes the data, then estimates trend and base demand through regression. This allows proper separation of trend and seasonality effects compared to other methods. The technique is computationally efficient and easy to implement.
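As a rough illustration of the deseasonalize-then-regress idea (a simplified stand-in for the paper's smoothed-regression procedure, not its exact algorithm; the seasonal pattern and data are made up):

```python
import numpy as np

def deseasonalized_trend_fit(y, period):
    """Estimate multiplicative seasonal indices, divide them out,
    then fit base level and trend by ordinary least squares."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    # Seasonal index: mean of each season divided by the overall mean
    season_means = np.array([y[s::period].mean() for s in range(period)])
    indices = season_means / y.mean()
    deseason = y / indices[t % period]
    # OLS on the deseasonalized series: deseason = base + trend * t
    trend, base = np.polyfit(t, deseason, 1)
    return base, trend, indices

# Toy series: linear trend times a period-4 multiplicative season
t = np.arange(40)
factors = np.array([1.2, 0.8, 1.0, 1.0])
y = (100 + t) * factors[t % 4]
base, trend, idx = deseasonalized_trend_fit(y, period=4)
```

Because the seasonal factors are removed first, the regression recovers the underlying base and trend without the seasonality leaking into the slope estimate.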
Forecasting of electric consumption in a semiconductor plant using time serie... (Alexander Decker)
This document summarizes a study that used time series methods to forecast electricity consumption in a semiconductor plant. The study analyzed 36 months of historical electricity consumption data from 2010-2012 to select the best forecasting model. Single exponential smoothing was found to have the lowest Mean Absolute Percentage Error (MAPE) of 5.60% and was determined to be the best forecasting method. The selected model will be used to forecast future electricity consumption for the plant.
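Single exponential smoothing and MAPE are both simple to state; the sketch below shows a one-step-ahead SES forecast and the MAPE used to score it (the data and smoothing parameter are made up, not the plant's):

```python
def ses_one_step(y, alpha):
    """One-step-ahead single exponential smoothing: f[t] forecasts y[t]
    from the level smoothed over y[0..t-1], with f[0] = y[0]."""
    f = [y[0]]
    for t in range(1, len(y)):
        f.append(alpha * y[t - 1] + (1 - alpha) * f[t - 1])
    return f

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

y = [100, 110, 105, 115, 120]     # illustrative consumption figures
f = ses_one_step(y, alpha=0.5)
score = mape(y, f)                # about 5.22% for this toy series
```

In a model-selection exercise like the paper's, the same MAPE would be computed for each candidate method and the lowest-scoring one chosen.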
SIAM-AG21-Topological Persistence Machine of Phase Transition (Ha Phuong)
Presentation at SIAM Conference on Applied Algebraic Geometry (AG21), Aug. 2021.
Abstract. The study of phase transitions using data-driven approaches is challenging, especially when little prior knowledge of the system is available. Topological data analysis is an emerging framework for characterizing the shape of data and has recently achieved success in detecting structural transitions in materials science, such as the glass-liquid transition. However, data obtained from physical states may not have explicit shapes in the way structural materials do. We thus propose a general framework, termed the "topological persistence machine," to construct the shape of data from correlations in states, so that we can subsequently decipher phase transitions via qualitative changes in that shape. Our framework enables an effective and unified approach to phase transition analysis without prior knowledge about the phases and without requiring the investigation of large system sizes. We demonstrate its efficacy in detecting the Berezinskii-Kosterlitz-Thouless phase transition in the classical XY model and quantum phase transitions in the transverse-field Ising and Bose-Hubbard models. Interestingly, while these phase transitions are notoriously difficult to analyze using traditional methods, they can be characterized through our framework without prior knowledge of the phases. Our approach is thus expected to be widely applicable and of practical interest for exploring the phases of experimental physical systems.
This document presents a new forecasting model that combines fuzzy time series and automatic clustering techniques to forecast gasoline prices in Vietnam. The model first uses an automatic clustering algorithm to divide historical gasoline price data into clusters with varying interval lengths. It then fuzzifies the data based on the new intervals to determine fuzzy logical relationships and forecasted values. The model is applied to a dataset of gasoline prices in Vietnam. Results show the proposed model achieves higher forecasting accuracy than a first-order fuzzy time series model.
This document presents an overview of independent component analysis (ICA). It begins with notation for factor analysis and ICA models, then discusses assumptions and challenges of ICA including rotation ambiguity and non-Gaussianity. Methods for estimating the ICA model including measuring non-Gaussianity via kurtosis, entropy, and negentropy are described. A direct approach to ICA called product density ICA is outlined along with its optimization algorithm. Finally, an example application to handwritten digit images is briefly discussed.
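Kurtosis is the simplest of the non-Gaussianity measures the overview mentions; this sketch with synthetic data shows why it discriminates Gaussian from non-Gaussian sources:

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis, E[(x-mu)^4]/sigma^4 - 3: near zero for a
    Gaussian, positive for heavy-tailed sources, negative for
    sub-Gaussian sources such as the uniform distribution."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

rng = random.Random(0)
uniform = [rng.uniform(-1, 1) for _ in range(50000)]
gauss = [rng.gauss(0, 1) for _ in range(50000)]
# uniform is sub-Gaussian (excess kurtosis near -1.2); gauss is near 0
```

ICA estimation methods exploit exactly this gap: rotations of mixed signals are scored by how far their kurtosis (or negentropy) is from the Gaussian baseline.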
Hybrid model for forecasting space-time data with calendar variation effects (TELKOMNIKA JOURNAL)
The aim of this research is to propose a new hybrid model, the Generalized Space-Time Autoregressive with Exogenous Variable and Neural Network (GSTARX-NN) model, for forecasting space-time data with calendar variation effects. The GSTARX model is represented as a linear component with an exogenous variable, particularly a calendar variation effect such as Eid Fitr, whereas the NN handles the nonlinear component. Two studies were conducted: simulation studies and applications to monthly inflow and outflow currency data at Bank Indonesia in the East Java region. The simulation study showed that the hybrid GSTARX-NN model captures the data patterns well, i.e. trend, seasonality, calendar variation, and both linear and nonlinear noise series. Moreover, based on RMSE on the testing dataset, the application study on inflow and outflow data showed that the hybrid GSTARX-NN models tend to give more accurate forecasts than the VARX and GSTARX models. These results are in line with the conclusion of the M3 forecasting competition, which stated that hybrid or combined models, on average, yield better forecasts than individual models.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
CCS2019-Topological time-series analysis with delay-variant embedding (Ha Phuong)
Q. H. Tran and Y. Hasegawa, Topological time-series analysis with delay-variant embedding, Oral Presentation at Conference on Complex Systems, Singapore, Singapore, Oct. 2019.
Here are the steps to solve this problem using Galerkin's technique:
1. Write the weak form of the differential equation:
∫(AE δu − δu d²u/dx² − a δu x) dx = 0
2. Choose the trial function u(x) = a₀ + a₁x.
3. Choose the weight functions δu = 1, x, x², ...
4. Substitute the trial function and weight functions into the weak form and integrate by parts.
5. Apply the essential boundary conditions to eliminate terms involving du/dx at the boundaries.
6. Solve the resulting algebraic equations to determine the unknown coefficients a₀ and a₁.
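The slide's own equation and constants (AE, a) are not fully recoverable from this summary, so here is a worked stand-in that follows the same steps for the model problem -u'' = 1 on (0, 1) with u(0) = u(1) = 0 and a one-term trial function:

```python
def galerkin_1d():
    """Galerkin solution of -u'' = 1, u(0) = u(1) = 0, with the one-term
    trial u(x) = a * phi(x), phi(x) = x(1 - x), which already satisfies
    the boundary conditions.  After integrating the weak form by parts:
        a * integral(phi' * phi') dx = integral(1 * phi) dx
    """
    n = 10000
    h = 1.0 / n
    k = f = 0.0
    for i in range(n):
        x = (i + 0.5) * h          # midpoint quadrature
        dphi = 1.0 - 2.0 * x       # phi'(x)
        k += dphi * dphi * h       # stiffness term, -> 1/3
        f += x * (1.0 - x) * h     # load term, -> 1/6
    return f / k                   # coefficient a

a = galerkin_1d()                  # analytically a = 1/2
```

The single Galerkin equation gives a = (1/6)/(1/3) = 1/2, so u = x(1 - x)/2, which is also the exact solution of this model problem.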
Composite Analysis of Phase Resolved Partial Discharge Patterns using Statist... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, and many more.
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
The document discusses finite element analysis and provides information on various topics related to it. It begins by listing the three methods of engineering analysis as experimental, analytical, and numerical/approximate methods. It then defines key finite element concepts such as finite element, finite element analysis, common element types, nodes, discretization, and the three phases of finite element method. It also discusses structural and non-structural problems, common methods associated with finite element analysis such as force method and stiffness method, and why polynomials are commonly used for interpolation in finite element analysis.
This document proposes a new quantile-based fuzzy time series forecasting model. It begins by discussing how time series models have been used to predict things like enrollments, weather, accidents, and stock prices. It then provides background on quantile regression models and fuzzy time series forecasting. The proposed method develops a time variant quantile-based fuzzy time series forecasting approach based on predicting future data trends. The method converts statistical quantiles to fuzzy quantiles using membership functions and provides a fuzzy metric to calculate future values based on trend forecasts. The model is applied to TAIFEX forecasting and shows better performance than other models in terms of complexity and accuracy.
Analysis of convection diffusion problems at various Peclet numbers using fin... (Alexander Decker)
This document summarizes research analyzing convection-diffusion problems at various Peclet numbers using finite volume and finite difference schemes. It introduces convection-diffusion equations, defines Peclet number, and describes how central differencing and upwind differencing schemes were used to discretize and solve sample convection-diffusion problems numerically. The results show that central differencing leads to inaccurate solutions at high Peclet numbers, while upwind differencing satisfies consistency criteria by accounting for flow direction.
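The central-vs-upwind behavior the study reports is easy to reproduce on a 1D model problem. The grid size and coefficients below are illustrative choices, not the paper's cases:

```python
import numpy as np

def solve_convection_diffusion(scheme, n=9, u=1.0, gamma=0.02):
    """1D steady convection-diffusion u*dphi/dx = gamma*d2phi/dx2 on (0,1),
    phi(0)=0, phi(1)=1, on a uniform grid with n interior nodes.
    Central differencing oscillates when the cell Peclet number
    u*dx/gamma exceeds 2; upwind stays bounded (here u*dx/gamma = 5)."""
    dx = 1.0 / (n + 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    d = gamma / dx ** 2            # diffusion conductance
    c = u / (2 * dx)               # convection (central) coefficient
    for i in range(n):
        if scheme == "central":
            aW, aE, aP = d + c, d - c, 2 * d
        else:                      # upwind, assuming u > 0
            aW, aE, aP = d + u / dx, d, 2 * d + u / dx
        A[i, i] = aP
        if i > 0:
            A[i, i - 1] = -aW
        else:
            b[i] += aW * 0.0       # boundary phi(0) = 0
        if i < n - 1:
            A[i, i + 1] = -aE
        else:
            b[i] += aE * 1.0       # boundary phi(1) = 1
    return np.linalg.solve(A, b)

phi_c = solve_convection_diffusion("central")  # oscillates, goes negative
phi_u = solve_convection_diffusion("upwind")   # monotone, stays in [0, 1]
```

The upwind scheme buys this boundedness at the cost of artificial (numerical) diffusion, which is why the choice matters at high Peclet numbers.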
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symm... (theijes)
This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
The document discusses various methods for constructing confidence intervals for estimating multinomial proportions. It aims to analyze the propensity for aberrations (i.e. unrealistic limits, such as negative bounds) in the interval estimates across different classical and Bayesian methods. Specifically, it provides the mathematical conditions under which each method may produce aberrant interval limits, such as zero-width intervals or bounds falling outside the [0, 1] range, especially for small sample counts. The document also develops an R program to facilitate computational implementation of the various methods for applied analysis of multinomial data.
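The kind of aberration described is easy to reproduce. The sketch below uses the plain Wald interval, a standard textbook method chosen here only for illustration, not necessarily one of the paper's specific procedures:

```python
import math

def wald_interval(count, n, z=1.96):
    """Unadjusted Wald interval for one multinomial cell proportion.
    For small counts the lower bound can fall below 0 (and, symmetrically,
    upper bounds can exceed 1); a zero count yields a zero-width interval."""
    p = count / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

low, high = wald_interval(1, 50)   # small cell count: lower bound < 0
zero = wald_interval(0, 50)        # zero count: degenerate (0.0, 0.0)
```

Diagnosing exactly when such aberrant limits occur, across many interval methods, is what the document's mathematical conditions formalize.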
New extensions of Rayleigh distribution based on inverted Weibull and Weibull ... (IJECEIAES)
The Rayleigh distribution was proposed in the fields of acoustics and optics by Lord Rayleigh. It has wide applications in communication theory, such as describing the instantaneous peak power of received radio signals, and in the study of vibrations and waves. It has also been used for modeling wave propagation, radiation, synthetic aperture radar images, and lifetime data in engineering and clinical studies. This work proposes two new extensions of the Rayleigh distribution, namely the Rayleigh inverted-Weibull (RIW) and the Rayleigh Weibull (RW) distributions. Several fundamental properties are derived, including the reliability and hazard functions, moments, the quantile function, random number generation, skewness, and kurtosis. The maximum likelihood estimators for the parameters of the two proposed models are derived along with asymptotic confidence intervals. Two real data sets, from communication systems and clinical trials, are analyzed to illustrate the proposed extensions. The results demonstrate that the proposed extensions fit better than other extensions and competing models.
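For the base Rayleigh distribution, the quantile function and inverse-transform random number generation mentioned among the derived properties look like this (base distribution only; the paper's RIW and RW extensions are not reproduced here):

```python
import math
import random

def rayleigh_quantile(p, sigma):
    """Quantile of Rayleigh(sigma): inverting the CDF
    F(x) = 1 - exp(-x^2 / (2 sigma^2)) gives
    Q(p) = sigma * sqrt(-2 * ln(1 - p))."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

def rayleigh_sample(n, sigma, seed=0):
    """Inverse-transform sampling: feed uniforms through the quantile."""
    rng = random.Random(seed)
    return [rayleigh_quantile(rng.random(), sigma) for _ in range(n)]

median = rayleigh_quantile(0.5, sigma=1.0)   # sqrt(2 ln 2), about 1.1774
xs = rayleigh_sample(20000, sigma=1.0)       # mean -> sigma*sqrt(pi/2)
```

Extensions such as RIW/RW follow the same recipe once their CDFs are inverted, which is why the quantile function is listed among the fundamental properties.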
A Study on Performance Analysis of Different Prediction Techniques in Predict... (IJRES Journal)
Time series data is a series of statistical observations tied to a specific instant or period, recorded on a regular basis such as monthly, quarterly, or yearly. Most researchers have applied a single prediction technique to a given dataset, without testing or comparing the performance of different techniques on the same data. In this research work, several well-known prediction techniques are applied to the same time series dataset, and average error and residual analysis are performed for each. The residual analysis comprises absolute residual, maximum residual, median of absolute residual, mean of absolute residual, and standard deviation. The same procedure is then applied to different time series datasets, and the technique that yields the minimum error and the smallest residual measures in most cases is selected.
1) The document analyzes optimum parameters for a geometric multigrid method for solving a two-dimensional thermoelasticity problem and Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, single-grid methods, and other literature to determine if coupling equations impacts multigrid performance.
Week 4 forecasting - time series - smoothing and decomposition - m.awaluddin.t (Maling Senk)
Forecasting - time series - smoothing and decomposition methods
Smoothing methods such as moving averages and exponential methods; the steps of decomposition methods with an example; and a case study of smoothing methods covering Single Exponential Smoothing, Double Exponential Smoothing, and Triple Exponential Smoothing.
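Of the three smoothing variants listed, Double Exponential Smoothing (Holt's method) is sketched below; the data and smoothing parameters are illustrative, not from the slides:

```python
def holt_forecast(y, alpha, beta, horizon):
    """Holt's (double) exponential smoothing: maintain a smoothed level
    and a smoothed trend, then extrapolate h steps ahead as
    level + h * trend.  Initialized from the first two observations."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# On perfectly linear data the method reproduces the line exactly
forecasts = holt_forecast([10, 12, 14, 16, 18], alpha=0.5, beta=0.5, horizon=2)
```

Single exponential smoothing drops the trend term entirely, and triple (Holt-Winters) adds a third smoothed component for seasonality.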
This document discusses dimensional analysis and its applications. It begins by defining dimensional analysis as a method to simplify physical problems by reducing variables using dimensional homogeneity. It then covers:
(1) Dimensions and units of common physical quantities
(2) Buckingham's Pi theorem for performing formal dimensional analysis to reduce variables to dimensionless parameters
(3) Examples of applying dimensional analysis to problems involving pressure gradients, drag forces, and other fluid mechanics quantities.
Survey on Unsupervised Learning in Datamining (IOSR Journals)
This document summarizes unsupervised learning techniques in data mining. It discusses clustering methods like partitioning and hierarchical clustering. Partitioning methods include k-means clustering and density-based clustering. K-means aims to minimize variance within clusters. Density-based clustering finds clusters as areas of high density separated by low density. Hierarchical clustering is agglomerative or divisive, building clusters either bottom-up or top-down. Agglomerative clustering starts with each point as a cluster and merges the closest pairs.
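The k-means procedure the survey describes can be sketched in a few lines (Lloyd's algorithm with random initialization; the toy points are made up):

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: alternately assign each point to its nearest
    centroid and recompute centroids as cluster means -- which is what
    minimizing within-cluster variance amounts to in practice."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[j].append(p)
        centroids = [
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated blobs: k-means recovers their means
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, clusters = kmeans(pts, 2)
```

Density-based and hierarchical methods avoid k-means's fixed cluster count and spherical-cluster bias, which is why the survey treats them separately.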
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
This document summarizes a study on the effect of parameters of a geometric multigrid method on CPU time for solving one-dimensional problems related to heat transfer and fluid flow. The parameters studied include coarsening ratio of grids, number of inner iterations, number of grid levels, and tolerances. Finite difference methods were used to discretize partial differential equations for problems involving Poisson, advection-diffusion, and heat transfer equations. Comparisons were made between multigrid and single grid methods like Gauss-Seidel and TDMA. Results confirmed some literature findings and presented some new results on the effect of parameters on CPU time.
Comparing Speech Recognition Systems (Microsoft API, Google API And CMU Sphinx) (IJERA Editor)
The idea of this paper is to design a tool for testing and comparing commercial speech recognition systems, such as the Microsoft Speech API and Google Speech API, with open-source speech recognition systems such as Sphinx-4. The best way to compare automatic speech recognition systems across different environments is to use audio recordings selected from different sources and calculate the word error rate (WER). Although the WERs of the three aforementioned systems were acceptable, the Google API was observed to be superior.
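Word error rate is the edit distance between word sequences, normalized by the reference length. A minimal implementation, with a made-up reference/hypothesis pair:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference words,
    computed via Levenshtein distance on word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[-1][-1] / len(ref)

# One substitution (sat -> sit) and one deletion (the): WER = 2/6
wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")
```

Note that WER can exceed 1 when the hypothesis inserts many spurious words, which is why it is an error rate rather than an accuracy.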
Defects, Root Causes in Casting Process and Their Remedies: Review (IJERA Editor)
Many industries aim to improve both the quality and the productivity of their manufactured products. A number of process parameters must be controlled during casting, and organizations face many uncertainties and defects; casting industries therefore need technical solutions to minimize them. This review paper presents various casting defects and their root causes for engine parts produced by the casting process, and provides preventive actions to improve quality as well as productivity at the industrial level.
Hybrid model for forecasting space-time data with calendar variation effectsTELKOMNIKA JOURNAL
The aim of this research is to propose a new hybrid model, i.e. Generalized Space-Time
Autoregressive with Exogenous Variable and Neural Network (GSTARX-NN) model for forecasting
space-time data with calendar variation effect. GSTARX model represented as a linear component with
exogenous variable particularly an effect of calendar variation, such as Eid Fitr. Whereas, NN was a model
for handling a nonlinear component. There were two studies conducted in this research, i.e. simulation
studies and applications on monthly inflow and outflow currency data in Bank Indonesia at East Java
region. The simulation study showed that the hybrid GSTARX-NN model could capture well the data
patterns, i.e. trend, seasonal, calendar variation, and both linear and nonlinear noise series. Moreover,
based on RMSE at testing dataset, the results of application study on inflow and outflow data showed that
the hybrid GSTARX-NN models tend to give more accurate forecast than VARX and GSTARX models.
These results in line with the third M3 forecasting competition conclusion that stated hybrid or combining
models, in average, yielded better forecast than individual models.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
CCS2019-opological time-series analysis with delay-variant embeddingHa Phuong
Q. H. Tran and Y. Hasegawa, Topological time-series analysis with delay-variant embedding, Oral Presentation at Conference on Complex Systems, Singapore, Singapore, Oct. 2019.
Here are the steps to solve this problem using Galerkin's technique:
1. Write the weak form of the differential equation:
∫(AEδu - δu d2u/dx2 - aδux)dx = 0
2. Choose the trial function u(x) = a0 + a1x
3. Choose the weight function δu = 1, x, x2...
4. Substitute the trial function and weight functions into the weak form and integrate by parts.
5. Apply the essential boundary conditions to eliminate terms involving du/dx at boundaries.
6. Solve the resulting algebraic equations to determine the unknown coefficients a0 and a1
Composite Analysis of Phase Resolved Partial Discharge Patterns using Statist...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
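The weighted-average prediction described above can be sketched for a static switching mixture of normal regression components; the component coefficients, mixing weights, and the intercept-plus-lag regressor are illustrative assumptions, not the document's actual model:

```python
import numpy as np

# Illustrative mixing probabilities and per-component regression coefficients
comp_weights = np.array([0.7, 0.3])
comp_params = [np.array([1.0, 0.5]),
               np.array([-0.2, 2.0])]

def mixture_predict(x, weights, params):
    """Point prediction: weighted average of component point predictions."""
    comp_means = np.array([p @ x for p in params])
    return weights @ comp_means

def multi_step(x0, steps, weights, params):
    """Recursive multi-step prediction: substitute the previous point
    estimate back into the components as the regressor for the next step."""
    preds, x = [], x0.copy()
    for _ in range(steps):
        y = mixture_predict(x, weights, params)
        preds.append(y)
        x = np.array([1.0, y])      # [intercept, previous point estimate]
    return preds

preds = multi_step(np.array([1.0, 0.0]), 3, comp_weights, comp_params)
print(preds)
```

For a dynamic switching model the weights would themselves be updated at each step; here they stay fixed, which is the static case the document presents first.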
The document discusses finite element analysis and provides information on various topics related to it. It begins by listing the three methods of engineering analysis as experimental, analytical, and numerical/approximate methods. It then defines key finite element concepts such as finite element, finite element analysis, common element types, nodes, discretization, and the three phases of finite element method. It also discusses structural and non-structural problems, common methods associated with finite element analysis such as force method and stiffness method, and why polynomials are commonly used for interpolation in finite element analysis.
This document proposes a new quantile-based fuzzy time series forecasting model. It begins by discussing how time series models have been used to predict things like enrollments, weather, accidents, and stock prices. It then provides background on quantile regression models and fuzzy time series forecasting. The proposed method develops a time variant quantile-based fuzzy time series forecasting approach based on predicting future data trends. The method converts statistical quantiles to fuzzy quantiles using membership functions and provides a fuzzy metric to calculate future values based on trend forecasts. The model is applied to TAIFEX forecasting and shows better performance than other models in terms of complexity and accuracy.
Analysis of convection diffusion problems at various peclet numbers using fin...Alexander Decker
This document summarizes research analyzing convection-diffusion problems at various Peclet numbers using finite volume and finite difference schemes. It introduces convection-diffusion equations, defines Peclet number, and describes how central differencing and upwind differencing schemes were used to discretize and solve sample convection-diffusion problems numerically. The results show that central differencing leads to inaccurate solutions at high Peclet numbers, while upwind differencing satisfies consistency criteria by accounting for flow direction.
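The contrast the study reports can be reproduced in a few lines for the 1D steady convection-diffusion equation. F is the convective flux and D the diffusive conductance per cell, so F/D is the cell Peclet number; the grid size and Peclet value are illustrative choices:

```python
import numpy as np

def solve_cd(n, F, D=1.0, scheme="central"):
    """1D steady convection-diffusion with phi(0) = 0 and phi(1) = 1."""
    if scheme == "central":
        aW, aE = D + F / 2, D - F / 2   # aE goes negative when F/D > 2
    else:                               # first-order upwind, flow in +x
        aW, aE = D + F, D
    aP = aW + aE
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = aP
        if i > 0:
            A[i, i - 1] = -aW
        # left boundary value is 0, so it contributes nothing to b
        if i < n - 1:
            A[i, i + 1] = -aE
        else:
            b[i] += aE * 1.0            # right boundary value phi = 1
    return np.linalg.solve(A, b)

phi_c = solve_cd(10, F=5.0, scheme="central")  # cell Pe = 5 > 2: oscillates
phi_u = solve_cd(10, F=5.0, scheme="upwind")   # monotone but more diffusive
print(phi_c.round(3))
print(phi_u.round(3))
```

At cell Peclet numbers above 2 the central-difference east coefficient becomes negative, producing the unphysical oscillations the document describes, while upwind differencing stays monotone because it respects the flow direction.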
Multiple Linear Regression Model with Two Parameter Doubly Truncated New Symm...theijes
This document presents a multiple linear regression model with errors that follow a two-parameter doubly truncated new symmetric distribution. The model extends traditional linear regression by allowing for non-Gaussian distributed errors that are finite and bounded. Properties of the doubly truncated new symmetric distribution are derived, including its probability density function and characteristic function. Maximum likelihood and ordinary least squares methods are used to estimate the model parameters. A simulation study compares the proposed model to existing models that assume Gaussian or new symmetric distributed errors.
The document discusses various methods for constructing confidence intervals for estimating multinomial proportions. It aims to analyze the propensity for aberrations (i.e. unrealistic bounds such as negative values) in the interval estimates across different classical and Bayesian methods. Specifically, it provides the mathematical conditions under which each method may produce aberrant interval limits, such as zero-width intervals or bounds falling outside [0, 1], especially for small sample counts. The document also develops an R program to facilitate computational implementation of the various methods for applied analysis of multinomial data.
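One aberration of the kind analyzed above is easy to exhibit with the classical Wald interval for a single cell proportion; the counts below are illustrative, and the document's own implementation is in R rather than Python:

```python
import math

def wald_interval(count, n, z=1.96):
    """Classical Wald interval p-hat +/- z * sqrt(p-hat(1-p-hat)/n)."""
    p = count / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = wald_interval(1, 20)    # small cell count
print(lo < 0)                    # True: lower bound is negative (aberrant)
print(wald_interval(0, 20))      # (0.0, 0.0): a zero-width interval
```

Both failure modes named in the summary (a zero-width interval and a bound outside [0, 1]) appear already at n = 20, which is why the small-sample conditions matter.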
New extensions of Rayleigh distribution based on inverted Weibull and Weibull ...IJECEIAES
The Rayleigh distribution was proposed in the fields of acoustics and optics by Lord Rayleigh. It has wide applications in communication theory, such as description of instantaneous peak power of received radio signals, i.e. study of vibrations and waves. It has also been used for modeling of wave propagation, radiation, synthetic aperture radar images, and lifetime data in engineering and clinical studies. This work proposes two new extensions of the Rayleigh distribution, namely the Rayleigh inverted-Weibull (RIW) and the Rayleigh Weibull (RW) distributions. Several fundamental properties are derived in this study; these include reliability and hazard functions, moments, quantile function, random number generation, skewness, and kurtosis. The maximum likelihood estimators for the model parameters of the two proposed models are also derived along with the asymptotic confidence intervals. Two real data sets in communication systems and clinical trials are analyzed to illustrate the concept of the proposed extensions. The results demonstrated that the proposed extensions showed better fitting than other extensions and competing models.
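Two of the ingredients listed above (quantile function and random number generation) can be sketched for the base Rayleigh distribution; the RIW and RW extensions in the paper build on this, and the sigma value below is an arbitrary illustration:

```python
import math
import random

def rayleigh_quantile(u, sigma):
    """Q(u) = sigma * sqrt(-2 ln(1 - u)), the inverse of the Rayleigh CDF
    F(x) = 1 - exp(-x^2 / (2 sigma^2))."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - u))

def rayleigh_sample(sigma, rng=random.random):
    """Inverse-transform sampling: feed a uniform draw through the quantile."""
    return rayleigh_quantile(rng(), sigma)

# The median follows directly: Q(0.5) = sigma * sqrt(2 ln 2)
sigma = 2.0
print(rayleigh_quantile(0.5, sigma))   # ~2.3548 for sigma = 2
```

The same inverse-transform pattern carries over to the proposed extensions once their CDFs are inverted, which is why the paper derives the quantile functions explicitly.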
A Study on Performance Analysis of Different Prediction Techniques in Predict...IJRES Journal
Time series data is a series of statistical data related to a specific instant or a specific time period, with measurements recorded on a regular basis such as monthly, quarterly or yearly. Most researchers have used a single prediction technique for time series data, without testing all prediction techniques on the same data set or comparing their performance on it. In this research work, several well-known prediction techniques have been applied to the same time series data set. The average error and residual analysis have been computed for each applied technique, and one technique has been selected based on the minimum average error and residual analysis among all applied techniques. The residual analysis comprises the absolute residual, maximum residual, median of absolute residuals, mean of absolute residuals and standard deviation. To finalize the algorithm, the same procedure has been applied to different time series data sets. Finally, the technique that gives the minimum error and the minimum residual values in most cases has been selected.
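The residual analysis named above reduces to a handful of summary statistics per technique; the series and forecasts below are illustrative placeholders, not the study's data:

```python
import statistics

def residual_summary(actual, predicted):
    """Summary statistics of absolute residuals, as used to rank techniques."""
    abs_res = [abs(a - p) for a, p in zip(actual, predicted)]
    return {
        "max_abs_residual": max(abs_res),
        "median_abs_residual": statistics.median(abs_res),
        "mean_abs_residual": statistics.mean(abs_res),
        "std_abs_residual": statistics.stdev(abs_res),
    }

actual = [10.0, 12.0, 13.0, 12.5, 14.0]
forecast = [9.5, 12.4, 12.8, 13.0, 13.6]
summary = residual_summary(actual, forecast)
print(summary)
```

Running this for each candidate technique on the same data set and picking the smallest values mirrors the selection procedure the abstract describes.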
1) The document analyzes optimum parameters for a geometric multigrid method for solving a two-dimensional thermoelasticity problem and Laplace equation numerically.
2) It studies the effect of grid size, inner iterations, and number of grids on computational time.
3) The results are compared between the two problems, single-grid methods, and other literature to determine if coupling equations impacts multigrid performance.
Week 4 forecasting - time series - smoothing and decomposition - m.awaluddin.tMaling Senk
Forecasting - time series - smoothing and decomposition methods
Smoothing methods such as moving averages and exponential methods; the steps for the decomposition methods with an example; and a case study of smoothing methods covering Single Exponential Smoothing, Double Exponential Smoothing and Triple Exponential Smoothing.
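The single and double (Holt's linear) variants mentioned above can be sketched directly from their recurrences; the alpha/beta values and data are illustrative choices, not taken from the case study:

```python
def single_es(series, alpha=0.3):
    """Single exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}."""
    s = series[0]
    out = [s]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
        out.append(s)
    return out

def double_es(series, alpha=0.3, beta=0.1):
    """Double exponential smoothing (Holt): separate level and trend."""
    level, trend = series[0], series[1] - series[0]
    out = [level]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        out.append(level)
    return out

data = [10, 12, 13, 12, 15, 16]
print(single_es(data)[-1])   # smoothed value at the last observation
print(double_es(data)[-1])
```

Triple exponential smoothing adds a third recurrence for a seasonal component on top of the level and trend shown here.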
This document discusses dimensional analysis and its applications. It begins by defining dimensional analysis as a method to simplify physical problems by reducing variables using dimensional homogeneity. It then covers:
(1) Dimensions and units of common physical quantities
(2) Buckingham's Pi theorem for performing formal dimensional analysis to reduce variables to dimensionless parameters
(3) Examples of applying dimensional analysis to problems involving pressure gradients, drag forces, and other fluid mechanics quantities.
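The variable reduction of Buckingham's Pi theorem (point 2 above) amounts to finding the nullspace of the dimension matrix: each nullspace vector gives the exponents of one dimensionless group. The drag-force variable set below is an assumed textbook example, not necessarily the one in the document:

```python
import sympy as sp

# Dimension exponents in (M, L, T) for F, rho, V, D, mu
dim = sp.Matrix([
    # M   L   T
    [1,   1, -2],   # F   (drag force)
    [1,  -3,  0],   # rho (density)
    [0,   1, -1],   # V   (velocity)
    [0,   1,  0],   # D   (length)
    [1,  -1, -1],   # mu  (viscosity)
]).T                 # rows = base dimensions, columns = variables

# Each nullspace vector is an exponent set making the product dimensionless
pi_groups = dim.nullspace()
print(len(pi_groups))   # 5 variables - rank 3 = 2 dimensionless Pi groups
```

This recovers the familiar count for the drag problem: two Pi groups, conventionally written as the drag coefficient F/(ρV²D²) and the Reynolds number ρVD/μ.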
Survey on Unsupervised Learning in DataminingIOSR Journals
This document summarizes unsupervised learning techniques in data mining. It discusses clustering methods like partitioning and hierarchical clustering. Partitioning methods include k-means clustering and density-based clustering. K-means aims to minimize variance within clusters. Density-based clustering finds clusters as areas of high density separated by low density. Hierarchical clustering is agglomerative or divisive, building clusters either bottom-up or top-down. Agglomerative clustering starts with each point as a cluster and merges the closest pairs.
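The k-means loop described above (assign each point to its nearest centroid, recompute centroids as cluster means, repeat) fits in a short sketch; initializing from the first k points is a simplification for illustration, and the data are synthetic:

```python
import math

def kmeans(points, k, iters=20):
    centroids = points[:k]          # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

pts = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.2), (5.1, 4.9), (0.2, 0.1), (4.9, 5.2)]
print(kmeans(pts, 2))   # two centroids near (0.1, 0.1) and (5.0, 5.03)
```

Each iteration can only decrease the within-cluster variance, which is the objective the summary says k-means minimizes; real implementations add smarter initialization (e.g. k-means++) and a convergence check.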
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
This document summarizes a study on the effect of parameters of a geometric multigrid method on CPU time for solving one-dimensional problems related to heat transfer and fluid flow. The parameters studied include coarsening ratio of grids, number of inner iterations, number of grid levels, and tolerances. Finite difference methods were used to discretize partial differential equations for problems involving Poisson, advection-diffusion, and heat transfer equations. Comparisons were made between multigrid and single grid methods like Gauss-Seidel and TDMA. Results confirmed some literature findings and presented some new results on the effect of parameters on CPU time.
Comparing Speech Recognition Systems (Microsoft API, Google API And CMU Sphinx)IJERA Editor
The idea of this paper is to design a tool that will be used to test and compare commercial speech recognition systems, such as Microsoft Speech API and Google Speech API, with open-source speech recognition systems such as Sphinx-4. The best way to compare automatic speech recognition systems in different environments is by using some audio recordings that were selected from different sources and calculating the word error rate (WER). Although the WERs of the three aforementioned systems were acceptable, it was observed that the Google API is superior.
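The WER metric used above is the word-level Levenshtein distance divided by the reference length; a minimal dynamic-programming sketch (the example sentences are illustrative):

```python
def wer(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1/6 ≈ 0.167
```

Averaging this over a test set of recordings gives the per-system WER figures used for the comparison.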
Defects, Root Causes in Casting Process and Their Remedies: ReviewIJERA Editor
Many industries aim to improve the quality as well as the productivity of their manufactured products. A number of process parameters must be controlled during the casting process; otherwise organizations face many uncertainties and defects. Casting industries therefore need technical solutions to minimize these uncertainties and defects. This review paper presents various casting defects and their root causes for engine parts produced by the casting process, and also provides preventive actions to improve quality and productivity at an industrial level.
Brainstorming: Thinking - Problem Solving StrategyIJERA Editor
Brainstorming is a popular tool that helps you generate creative answers to a problem. It is mainly useful when you want to break out of stale, established patterns of thinking, so that you can develop new ways of looking at things. It also helps you overcome many of the concerns that can make collective problem-solving a sterile and substandard process. Though group brainstorming is often more effective at generating ideas than normal group problem-solving, study after study has revealed that when individuals brainstorm on their own, they come up with more ideas, and often better-quality ideas, than groups of people who brainstorm together.
Duplex 2209 Weld Overlay by ESSC ProcessIJERA Editor
In the modern world of industrialization, wear is eating metal assets worth millions of dollars per year. The wear takes the form of corrosion, erosion, abrasion, etc., which occur in process industries such as oil & gas, refineries, cement plants, steel plants, shipping and offshore structures. Equipment such as pressure vessels, heat exchangers and hydro-processing reactors, which very often work at elevated temperatures, faces corrosion on the internal diameter. Duplex 2209 weld overlay on ferrous material is developed for high corrosion resistance and high productivity by the electroslag strip cladding (ESSC) process, owing to its low dilution of ~10% compared with the SMAW, GTAW or FCAW processes. Because of this low dilution, undiluted chemistry can be achieved with a single layer, unlike other weld overlay processes. The facility to carry out weld overlay by ESSC and its testing was developed in-house.
In this paper, we introduce intense subgraphs and feeble subgraphs based on their densities and discuss mild balanced IFG and equally balanced intuitionistic fuzzy subgraphs and their properties. The operations “sum” and “union” of subgraphs of Intuitionistic Fuzzy Graphs (IFG) are analyzed. As an application of equally balanced IF subgraphs, curriculum and syllabus formation in higher educational system is discussed. Mathematics Subject Classification: 05C72, 03E72, 03F55.
“Design and Analysis of a Windmill Blade in Windmill Electric Generation System”IJERA Editor
Wind turbines are among the most important sources of renewable energy; they extract kinetic energy from the wind. A small wind turbine blade was designed and analyzed in this work. The power performance of small horizontal-axis wind turbines was simulated in detail using modified blade element momentum (BEM) methods. A new blade was designed using different angles of attack (0°, 5°, 10°), different wind speeds (4 m/s, 5 m/s and 12 m/s) and rotor radii (0.5 m and 1 m). From this, the chord length and power output were determined theoretically. A material for the proposed blade was also selected.
Empirical Study of a Key Authentication Scheme in Public Key CryptographyIJERA Editor
Public key cryptosystem plays major role in many online business applications. In public key cryptosystem, public key need not be protected for confidentiality, but the authenticity of public key is needed. Earlier, many key authentication schemes are developed based on discrete logarithms. Each scheme has its own drawbacks. We developed a secure key authentication scheme based on discrete logarithms to avoid the drawbacks of earlier schemes. In this paper, we illustrate the empirical study to show the experimental proof of our scheme.
A Proposed Method for Safe Disposal of Consumed Photovoltaic ModulesIJERA Editor
The growth of domestic and large-scale applications of solar energy, especially photovoltaic (PV) cells, which has reached up to 40% annually worldwide since 2000, means that the technology has stepped out of the demonstration phase into large-scale deployment. Several countries have started to exploit this huge potential as part of their future energy supply. Photovoltaic cells are manufactured from various semiconductors; materials that are moderately good conductors of electricity but harmful to the environment. End-of-life disposal of PV modules can be an environmental issue. However, due to the long lifespan of PV modules (25 to 30 years), currently most PV modules have not reached the disposal stage. As a result, there is very little experience and knowledge of disposal and/or recycling techniques for PV modules. This paper proposes a method for safe disposal of solar panels after the end of their life by burying the PV cells in concrete blocks that may be used in different civil applications. Two types of PV cells (mono-crystalline & multi-crystalline) were selected to be mixed with concrete components to investigate their effect on the properties of the concrete. The experimental results showed that the PV cells affect the concrete properties: a reduction in concrete compressive strength and density and an increase in concrete porosity were observed. In general, this study showed the validity of the proposed method, to be further investigated, for safe disposal of consumed photovoltaic modules.
POWER CONSUMING SYSTEM USING WSN IN HEMSIJERA Editor
Today’s buildings account for a large fraction of our energy consumption. In an effort to economize scarce fossil fuels, sensor networks are a valuable tool to increase the energy efficiency of buildings without severely reducing our quality of life. Within a smart building, many sensors and actuators are interconnected to form a control system. Nowadays, the deployment of a building control system is complicated because of differing communication standards. Here we present a web services-based approach to integrate resource-constrained sensor and actuator nodes into IP-based networks. A key feature of our approach is its capability for automatic service discovery. The design and development of an intelligent monitoring and controlling system for home appliances in a real-time system is reported in this paper. This system principally monitors electrical parameters such as voltage and current and subsequently calculates the power consumption of the home appliances that need to be monitored. The novelty of this system is that the controlling mechanism is implemented in several ways. The proposed system is also economical and easy to operate. Thanks to these intelligent characteristics, it reduces electricity expenses and is people friendly.
Numerical Model and Experimental Validation of the Hydrodynamics in an Indust...IJERA Editor
This paper describes the development of a numerical model and experimental validation of the hydrodynamics in an industrial-scale sewage sludge bubbling fluidized bed incinerator. The numerical model and simulations are performed using the commercial CFD software package ANSYS Fluent 14.5. The complex geometry of the developed numerical model represents the actual industrial-scale bubbling fluidized bed combustor. The gas-solid flow behaviour inside the bed was described using the Eulerian-Eulerian multiphase model. The momentum exchange coefficients between the gas phase and solid particles were described by the Syamlal and O’Brien drag model equations. The CFD transient simulations were run for 350 seconds at the optimum operating conditions of the fluidized bed, with a bed temperature of 850°C. The experiments were carried out using quartz sand with three different particle sizes, with diameters ranging from 0.5 mm to 1.5 mm and a density of 2650 kg/m³. The industrial-scale furnace was filled with bed material to a bed height of 0.85 m. The same operating parameters were applied in both the experimental and numerical studies. The hydrodynamics of the gas-solid industrial-scale bubbling fluidized bed at operating conditions are investigated in CFD simulations of this three-dimensional (3D) complex geometry. To estimate the prediction quality of the simulations based on the developed numerical model, the minimum fluidization gas velocity and pressure drop results obtained from the CFD simulations are validated against the experimental measurements. The simulation results for the pressure drop and minimum fluidization gas velocity of the industrial-scale sewage sludge incinerator, based on the Eulerian-Eulerian method and the Syamlal and O’Brien drag model, are in good agreement with the experimentally measured data.
Phyto cover for Sanitary Landfill Sites: A brief reviewIJERA Editor
Landfill gases (LFG) are produced due to biodegradation of organic fraction of municipal solid waste (MSW) when water comes in contact with buried wastes. The conventional clay cover is still practiced to mitigate the percolation of water in landfills in India. Gas extraction systems in landfill for gas collection are used but are much expensive. Thus, “Phytocapping” technique can be one of the alternatives to mitigate landfill gases and to minimize percolation of water into the landfill. Indian plants with locally available soil and municipal solid waste can be tested for the purpose of methane mitigation, heavy metals remediation from leachate. Methane oxidation due to vegetation can be observed compared to non-vegetated landfill. Root zone methane concentrations can be monitored for the plant species
Topology Management for Mobile Ad Hoc Networks ScenarioIJERA Editor
Cooperative communication is a key topic at present. Its results can be accessed through a proactive protocol via route request packet sending and receiving. The main issue is how communication is carried out in MANETs. Mobile ad-hoc networks are self-configurable networks; each node behaves like both server and client in a MANET. The COCO (Capacity Optimized Cooperative Communication) model was developed for accessing these types of resources in MANETs. This model cannot provide sufficient communication or overall network performance: it provides a useful capacity improvement in mobile ad-hoc networks, but it consumes more power resources in doing so. Using simulation examples, we show that physical-layer cooperative communications have significant impacts on the performance of topology control and network capacity, and that the proposed topology management scheme can considerably improve network capacity in MANETs with cooperative communications.
Direction of Arrival Estimation Based on MUSIC Algorithm Using Uniform and No...IJERA Editor
In signal processing, the direction of arrival (DOA) estimation denotes the direction from which a propagating wave arrives at a point where a set of antennas is located. Using an array antenna has an advantage over a single antenna in achieving improved performance by applying the Multiple Signal Classification (MUSIC) algorithm. This paper focuses on estimating the DOA using a uniform linear array (ULA) and a non-uniform linear array (NLA) of antennas to analyze the performance factors that affect the accuracy and resolution of the system based on the MUSIC algorithm. The direction of arrival estimation is simulated on a MATLAB platform with a set of input parameters such as array elements, signal-to-noise ratio, number of snapshots and number of signal sources. An extensive simulation has been conducted, and the results show that the NLA with DOA estimation for a co-prime array can achieve accurate and efficient DOA estimation.
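The MUSIC pipeline for a ULA can be sketched in a few steps: eigendecompose the sample covariance, keep the noise subspace, and scan a pseudospectrum over candidate angles. The scenario below (8 half-wavelength-spaced elements, one source at 20°, the chosen SNR) is an illustrative assumption, and the paper's own simulations were done in MATLAB:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_snap, d = 8, 200, 0.5      # elements, snapshots, spacing (wavelengths)
k = np.arange(n_ant)

def steering(theta_deg):
    """ULA steering vector for a plane wave from angle theta."""
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

# Simulate one narrowband source at 20 degrees plus white noise
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))
X = np.outer(steering(20.0), s) + 0.1 * noise

# MUSIC: noise subspace = eigenvectors of the smallest eigenvalues
R = X @ X.conj().T / n_snap
_, V = np.linalg.eigh(R)            # eigenvalues in ascending order
En = V[:, : n_ant - 1]              # one source -> 7 noise eigenvectors

grid = np.arange(-90.0, 90.0, 0.5)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])
print(grid[np.argmax(pseudo)])      # pseudospectrum peak near 20 degrees
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; for an NLA only the element positions in `steering` change.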
Study On The External Gas-Assisted Mold Temperature Control For Thin Wall Inj...IJERA Editor
Dynamic mold surface temperature control (DMTC) has many advantages in micro-injection molding as well as thin-wall molding. In this paper, DMTC is applied to a thin-wall molded part, with observation of the weldline appearance and the weldline strength. The heating step of DMTC is achieved by directing hot air flow at the weldline area. The results show that a heating rate of 4.5 °C/s could be reached, raising the mold surface temperature from 30 °C to over 120 °C within 15 s. The melt filling was operated at high temperature in the weldline area; therefore, the weldline appearance was eliminated. In addition, the weldline strength was improved: the results show that the thinner part had the higher weldline strength.
Analysis of Emission from Petrol Vehicles in the Koforidua Municipality, GhanaIJERA Editor
Koforidua has seen its fair share in the increase in the number of cars on its roads over the past decade. This has resulted in progressive increase in traffic congestion on the roads and could lead to deterioration in the air quality. Exhaust gas emissions from a total of 104 vehicles were tested with an exhaust gas analyzer. Hydrocarbons (HC), Carbon dioxide (CO2) and Carbon monoxide (CO) were measured and compared with EU standards for gasoline vehicles and Auto Data Technical information. A series of algorithms developed using Microsoft Excel Spread Sheet were used to analyze the data collected. Out of the total number of cars tested, 74 and 80 cars passed the HC and CO tests respectively. 10 cars out of the total were rated as good under CO2 test. In total, 69.5% of the cars tested passed the various tests conducted and about 73 cars representing 70.2% of the cars tested were over 10 years and the emission standards for those years were flexible.
Some common Fixed Point Theorems for compatible - contractions in G-metric ...IJERA Editor
We prove some common fixed point theorems for compatible self mappings satisfying some kind of contractive type conditions on complete G-metric spaces and obtain results of Kumara Swamy and Phaneendra[6] and Sushanta Kumar Mohanta[16] as corollaries. 2010 Mathematics Subject Classification.47H10, 54H25
Performance Evaluation of Two-Level Photovoltaic Voltage Source Inverter Cons...IJERA Editor
The switching control schemes, including sinusoidal pulse-width modulation (SPWM) and space vector modulation (SVM), are very important for the efficiency and accuracy of the voltage source inverter (VSI). Therefore, this paper presents a performance evaluation of a two-level VSI for the photovoltaic (PV) system based on the adopted switching controllers, namely the SPWM and SVM switching methods. The evaluation procedure and accuracy are demonstrated and investigated using simulations conducted for a 1.5 kW inverter in a MATLAB/Simulink environment. Two types of loads are utilized to assess the performance of the VSI: a resistive (R) load and a resistive-inductive (RL) load. Total harmonic distortion (THD) is used for the comparison of the SPWM and the SVM. Results show that the SVM performs better than the SPWM in terms of THD rate. The THDs for the SVM-based system are found to be 0.02% and 0.08% for the R and RL loads, respectively; whereas the THDs for the SPWM controller are found to be 0.43% and 0.51% for the R and RL loads, respectively. Furthermore, mean square error (MSE) is also considered as a statistical indicator. The MSE indicates that the SVM switching controller technique has superior outcomes compared with the SPWM switching controller technique and thus increases the efficiency of the whole system.
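The THD figure of merit used in that comparison is the ratio of harmonic energy to the fundamental amplitude; a minimal FFT-based sketch, with an assumed test waveform (50 Hz fundamental plus a 5% third harmonic) rather than the paper's inverter output:

```python
import numpy as np

# One-second record sampled at 10 kHz -> FFT bin spacing of exactly 1 Hz
fs, f0, n = 10000, 50, 10000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)

spectrum = np.abs(np.fft.rfft(signal)) * 2 / n   # single-sided amplitudes
fund = spectrum[f0]                  # with 1 Hz bins, the index equals the Hz
harmonics = spectrum[2 * f0::f0][:9] # amplitudes at 100, 150, ..., 500 Hz

# THD = sqrt(sum of harmonic amplitudes^2) / fundamental amplitude
thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(round(thd * 100, 2))           # ~5.0 percent for this test waveform
```

Choosing a record length that holds an integer number of fundamental cycles (as here) avoids spectral leakage, which would otherwise bias the harmonic amplitudes.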
Evaluation of Anti-oxidant Activity of Elytraria acaulis Aerial ExtractsIJERA Editor
Elytraria acaulis, a stem less perennial herb of Acantheceae family has many medicinal and therapeutic properties. Anti oxidative activity of the aerial parts of this Elytraria acaulis were assessed in the present study. The aerial parts of the plant (Stem & Leaves) were extracted in different organic solvents such as n-Hexane, Ethanol, Methanol, Ethyl Acetate and Chloroform. Initially, Total Phenolic & Total Flavonoids content in different solvent plant extracts were estimated. The free radical scavenging and antioxidant activity of the Elytraria acaulis aerial extracts in different organic solvents were also assayed by DPPH assay, FRAP assay. The aerial extracts of Elytraria acaulis have shown significant anti oxidant activity. Hence, further studies on this plant will enable elucidation of its therapeutic properties and medicinal applications
Design of Low Power Vedic Multiplier Based on Reversible LogicIJERA Editor
This document describes a proposed design for an 8-bit low power Vedic multiplier based on reversible logic. It begins with background on reversible logic and how it can reduce power dissipation compared to irreversible logic. It then discusses the Vedic multiplication algorithm Urdhva Tiryakbhyam Sutra and how it can generate partial products and sums in a single step, reducing the number of adders needed compared to other multipliers. The proposed 8-bit multiplier design is described as using four 4-bit Vedic multiplier blocks and three 8-bit ripple carry adders built from reversible HNG gates. Simulation results showing reduced power, area and delay are discussed.
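The Urdhva Tiryakbhyam ("vertically and crosswise") pattern the multiplier is built on can be sketched in software: each result column sums all crosswise digit products at once, and the carries mirror the adder stages in the hardware. This is an algorithmic illustration, not the proposed gate-level design:

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Multiply two little-endian digit lists crosswise, then resolve carries."""
    n, m = len(a_digits), len(b_digits)
    cols = [0] * (n + m)
    for k in range(n + m - 1):
        # column k collects every crosswise product a_i * b_j with i + j = k
        cols[k] = sum(a_digits[i] * b_digits[k - i]
                      for i in range(max(0, k - m + 1), min(k + 1, n)))
    carry = 0
    for k in range(n + m):           # ripple the carries (the adder stages)
        total = cols[k] + carry
        cols[k], carry = total % base, total // base
    return cols

# 23 * 14 = 322; digits are little-endian, so 23 -> [3, 2] and 14 -> [4, 1]
print(urdhva_multiply([3, 2], [4, 1]))   # [2, 2, 3, 0], i.e. 322
```

With base 2 and 4-bit operands the same column structure describes the 4-bit Vedic blocks combined in the 8-bit design above; in hardware the column sums form in parallel, which is the source of the speed advantage.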
Locating Facts Devices in Optimized manner in Power System by Means of Sensit...IJERA Editor
This document summarizes a research paper that presents a new method for optimally locating Flexible AC Transmission System (FACTS) devices like Static Var Compensators (SVCs) and Thyristor Controlled Series Capacitors (TCSCs) in a power system network. The method uses sensitivity analysis to determine the optimal location and sizing of FACTS devices. It calculates voltage-reactive power sensitivity indices for each bus and line to determine which buses and lines are most sensitive to changes in reactive power. FACTS devices are then optimally located at the bus or line with the highest positive or negative sensitivity index, depending on whether a SVC or TCSC is being placed. The method is tested on the IEEE
Efficiency of recurrent neural networks for seasonal trended time series mode...IJECEIAES
Seasonal time series with trends are the most common data sets used in forecasting. This work focuses on the automatic processing of a
non-pre-processed time series by studying the efficiency of recurrent neural networks (RNN), in particular both long short-term memory (LSTM), and bidirectional long short-term memory (Bi-LSTM) extensions, for modelling seasonal time series with trend. For this purpose, we are interested in the learning stability of the established systems using the mean average percentage error (MAPE) as a measure. Both simulated and real data were examined, and we have found a positive correlation between the signal period and the system input vector length for stable and relatively efficient learning. We also examined the white noise impact on the learning performance.
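The MAPE measure used above for learning stability is straightforward to state; a short sketch with illustrative values (note that MAPE is undefined at zero actuals, which this sketch simply skips):

```python
def mape(actual, forecast):
    """MAPE = mean(|actual - forecast| / |actual|) * 100, skipping zeros."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

actual = [100.0, 110.0, 120.0, 130.0]
forecast = [102.0, 108.0, 123.0, 126.0]
print(mape(actual, forecast))   # mean absolute percentage error, in percent
```

Because it is scale-free, MAPE allows the learning curves of LSTM and Bi-LSTM models trained on differently scaled series to be compared directly, which is how the study uses it.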
Geoid height determination is one of the major problems of geodesy, because the use of satellite techniques in geodesy is increasing. Geoid heights can be determined using different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. This study examines how to construct the best fuzzy model. For this purpose, three different data sets were taken, and two different kinds of fuzzy model (two inputs, one output and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling methods, and the fuzzy approximation models were tested on the test points.
Shunt Faults Detection on Transmission Line by Waveletpaperpublications3
Abstract: Transmission line fault detection is a very important task, because a major portion of power system faults occurs in the transmission system. This paper presents a fast and reliable method of transmission line shunt fault detection. MATLAB Simulink is used to model an IEEE 9-bus test power system for a case study of various faults. In the proposed work, the Daubechies wavelet is applied for decomposition of the fault transients. The application of wavelet analysis helps in accurate classification of the various fault patterns. A wavelet entropy measure based on wavelet analysis is able to characterize unsteady signals and the complexity of the system in the time-frequency plane.
The results show that the proposed method is capable of detecting all the shunt faults.
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
Quantum clustering (QC), is a data clustering algorithm based on quantum mechanics which is
accomplished by substituting each point in a given dataset with a Gaussian. The width of the Gaussian is a
σ value, a hyper-parameter which can be manually defined and manipulated to suit the application.
Numerical methods are used to find all the minima of the quantum potential as they correspond to cluster
centers. Herein, we investigate the mathematical task of expressing and finding all the roots of the
exponential polynomial corresponding to the minima of a two-dimensional quantum potential. This is an
outstanding task because normally such expressions are impossible to solve analytically. However, we
prove that if the points are all included in a square region of size σ, there is only one minimum. This bound not only limits the number of solutions to look for by numerical means, it also allows us to propose a new numerical approach "per block". This technique decreases the number of particles by approximating some groups of particles by weighted particles. These findings are useful not only for the quantum clustering problem but also for the exponential polynomials encountered in quantum chemistry, solid-state physics and other applications.
On Selection of Periodic Kernels Parameters in Time Series Prediction cscpconf
This paper describes an analysis of the parameters of periodic kernels. Periodic kernels can be used for the prediction task, performed as a typical regression problem. Prediction of real time series is performed on the basis of the Periodic Kernel Estimator (PerKE). As periodic kernels require the setting of their parameters, it is necessary to analyse their influence on the prediction quality. This paper describes a simple methodology, based on grid search, for finding the values of the parameters of periodic kernels. Two different error measures are taken into consideration as prediction quality criteria but lead to comparable results. The methodology was tested on benchmark and real datasets and proved to give satisfactory results.
Spatio-Temporal Characterization with Wavelet Coherence: Anexus between Envir...ijsc
Identifying spatio-temporal synchrony in a complex, interacting and oscillatory coupled system is a challenge. In particular, the characterization of statistical relationships between environmental or biophysical variables and multivariate pandemic data is a difficult process because of the intrinsic variability and non-stationary nature of the time series in space and time. This paper presents a methodology to address these issues by examining the bivariate relationship between Covid-19 and temperature time series in the time-localized frequency domain using Singular Value Decomposition (SVD) and continuous cross-wavelet analysis. First, the dominant spatio-temporal trends are derived using the eigendecomposition of the SVD. The Covid-19 incidence data and the temperature data of the corresponding period are transformed into significant eigenstate vectors for each spatial unit. The Morlet wavelet transformation is performed to analyse and compare the frequency structure of the dominant trends derived by the SVD. The result provides cross-wavelet transform and wavelet coherence measures over ranges of time periods for the corresponding spatial units. Additionally, the wavelet power spectrum, paired wavelet coherence statistics and phase differences are estimated. The results suggest statistically significant coherency at various frequencies, providing insight into spatio-temporal dynamics. Moreover, they provide information about the complex conjugate dynamic relationships in terms of phases and phase differences.
Ill-posedness formulation of the emission source localization in the radio- d...Ahmed Ammar Rebai PhD
To contact the authors : tarek.salhi@gmail.com and ahmed.rebai2@gmail.com
In the field of radio detection in astroparticle physics, many studies have shown a strong dependence of the solution of the radio-transient source localization problem on the radio-shower time of arrival on antennas; such solutions can be purely numerical artifacts. Based on a detailed analysis of some already published results of radio-detection experiments such as CODALEMA 3 in France, AERA in Argentina and TREND in China, we demonstrate the ill-posed character of this problem in the sense of Hadamard. Two approaches have been used: the degeneration of the set of solutions and the bad conditioning of the mathematical formulation of the problem. A comparison between experimental results and simulations has been made to support the mathematical studies. Several properties of the non-linear least-squares function are discussed, such as the configuration of the set of solutions and the bias.
Gravitational search algorithm with chaotic map (gsa cm) for solving optimiza...eSAT Journals
Abstract
Gravitational Search Algorithm (GSA) is a new nature-inspired heuristic algorithm that utilizes the Newtonian law of gravity and mass interactions. It has attracted much attention since it provides high performance in solving various optimization problems. This study hybridizes GSA with chaotic equations. Ten chaotic-based GSA (GSA-CM) methods, which define the random selections by different chaotic maps, have been developed. The proposed methods have been applied to the minimization of benchmark problems and the results have been compared. The obtained numerical results show that most of the proposed algorithms improve the performance of GSA and the quality of its solutions.
Keywords: Computational Intelligence, Evolutionary Computation, Heuristic Algorithms, Chaotic Maps, Optimization
Methods.
SEQUENTIAL CLUSTERING-BASED EVENT DETECTION FOR NONINTRUSIVE LOAD MONITORINGcscpconf
The problem of change-point detection has been well studied and adopted in many signal processing applications. In such applications, the informative segments of the signal are the stationary ones before and after the change-point. However, for some novel signal processing and machine learning applications such as Non-Intrusive Load Monitoring (NILM), the information contained in the non-stationary transient intervals is of equal or even more importance to the recognition process. In this paper, we introduce a novel clustering-based sequential detection of abrupt changes in an aggregate electricity consumption profile with the accurate decomposition of the input signal into stationary and non-stationary segments. We also introduce various event models in the context of clustering analysis. The proposed algorithm is applied to building-level energy profiles with promising results for the residential BLUED power dataset.
Comparison of Different Methods for Fusion of Multimodal Medical ImagesIRJET Journal
This document compares different methods for fusing multimodal medical images, including PCA, DCT, SWT, and DWT. It provides an overview of each method, including formulations, process flow diagrams, algorithms, and advantages/disadvantages. PCA uses eigenvectors to reveal internal data structure and remove redundancy. DCT expresses image blocks as sums of cosine functions. SWT is a translation-invariant modification of DWT that does not decimate coefficients. DWT decomposes images into coarse and detailed frequency subbands using wavelet transforms. The document reviews each method for fusing medical images from different modalities to extract complementary information.
The document discusses various load forecasting methods used in power systems, including:
1) Exponential smoothing techniques like linear, exponential, and polynomial regression to model load growth over time.
2) Land use simulation to map existing and planned development to forecast load growth.
3) Box-Jenkins methodology using autoregressive and moving average processes to model load patterns for short-term forecasting.
A Novel Approach to Analyze Satellite Images for Severe Weather EventsIJERA Editor
Severe weather events (e.g. cyclones, thunderstorms, fog, floods etc.) have catastrophic effects on human life and aggravate the impacts on agriculture, the economy, and more. Therefore, information related to severe weather, in any form, is important for mitigation purposes and helps in delineating the causes and influences responsible. Satellite images provide such information related to the formation of such events at early stages and help in tracking their spatial and temporal development. This paper presents a framework to obtain maximum information about severe weather events from satellite imagery. The backbone of the proposed framework involves high-quality filtering of noise followed by a region-growing technique which seeds each pixel, thus providing refined information for further extraction. This approach will be very helpful not only for acquiring valuable information about developing severe weather events, but also for other fields (e.g. medicine and astronomy) where high-precision imaging is required.
Undetermined Mixing Matrix Estimation Base on Classification and CountingIJRESJOURNAL
ABSTRACT: This paper introduces mixing matrix estimation algorithms for underdetermined blind source separation. In view of the difficulty of determining the parameters, the complex calculations in the potential function method, and the difficulty of confirming the cluster centers in clustering methods, we propose a new method to estimate the mixing matrix based on classifying and counting the observed data. The experimental results show that the new algorithm not only simplifies the calculation but is also easier to understand. Besides, the new algorithm can provide a more accurate result according to the precision required.
A Combination of Wavelet Artificial Neural Networks Integrated with Bootstrap...IJERA Editor
In this paper, an iterative forecasting methodology for time series prediction that integrates wavelet de-noising and decomposition with an Artificial Neural Network (ANN) and Bootstrap methods is put forward.
Basically, a given time series to be forecasted is initially decomposed into trend and noise (wavelet) components
by using a wavelet de-noising algorithm. Both trend and noise components are then further decomposed by
means of a wavelet decomposition method producing orthonormal Wavelet Components (WCs) for each one.
Each WC is separately modelled through an ANN in order to provide both in-sample and out-of-sample
forecasts. At each time t, the respective forecasts of the WCs of the trend and noise components are simply
added to produce the in-sample and out-of-sample forecasts of the underlying time series. Finally, out-of-sample
predictive densities are empirically simulated by the Bootstrap sampler and the confidence intervals are then
yielded, considering some level of credibility. The proposed methodology, when applied to the well-known
Canadian lynx data that exhibit non-linearity and non-Gaussian properties, has outperformed other methods
traditionally used to forecast it.
Wavelet neural network conjunction model in flow forecasting of subhimalayan ...iaemedu
This document summarizes a study that uses a wavelet-neural network (WLNN) conjunction model for river flow forecasting of the Brahmaputra River in India. The model decomposes river discharge time series data into multiresolution time series using discrete wavelet transforms. These decomposed time series are then used as inputs to an artificial neural network (ANN) to forecast river flows at different lead times. The results of the WLNN model are compared to those of a single ANN model. The WLNN model is found to provide more accurate and consistent predictions than the ANN model alone due to its use of multiresolution time series data as inputs.
This summarizes a document about a filter-and-refine approach for reducing computational cost when performing correlation analysis on pairs of spatial time series datasets. It groups similar time series within each dataset into "cones" based on spatial autocorrelation. Cone-level correlation computation can then filter out many element pairs whose correlation is clearly below a threshold. The remaining pairs require individual correlation computation in the refinement phase. Experiments on Earth science datasets showed significant computational savings, especially with high correlation thresholds.
System for Prediction of Non Stationary Time Series based on the Wavelet Radi...IJECEIAES
This document proposes and examines the performance of a hybrid time series forecasting model called the wavelet radial bases function neural network (WRBFNN) model. The WRBFNN model uses wavelet transforms to extract coefficients from time series data as inputs to a radial basis function neural network for forecasting. The performance of the WRBFNN model is compared to a wavelet feed forward neural network (WFFNN) model using four types of non-stationary time series data. Results show the WRBFNN model achieves more accurate forecasts than the WFFNN model for certain types of non-stationary data, and trains significantly faster with fewer computational resources required.
A Singular Spectrum Analysis Technique to Electricity Consumption Forecasting
Bisher M. Iqelan, Int. Journal of Engineering Research and Application (IJERA), www.ijera.com
ISSN: 2248-9622, Vol. 7, Issue 3 (Part 3), March 2017, pp. 88-96
DOI: 10.9790/9622-0703038896
A Singular Spectrum Analysis Technique to Electricity
Consumption Forecasting
Bisher M. Iqelan
(Department of Mathematics, the Islamic University of Gaza, Gaza Strip, Palestine)
ABSTRACT
Singular Spectrum Analysis (SSA) is a relatively new and powerful nonparametric tool for analyzing and forecasting economic data. SSA is capable of decomposing the main time series into independent components such as trends, oscillatory behaviour and noise. This paper focuses on applying the SSA approach to the monthly electricity consumption of the Middle Province in Gaza Strip/Palestine. The forecasting results are compared with the results of exponential smoothing state space (ETS) and ARIMA models. The three techniques perform similarly well in the forecasting process. However, SSA outperforms the ETS and ARIMA techniques according to forecasting error accuracy measures.
Keywords: SSA, ETS state space, ARIMA, Forecasting, Electricity consumption time series
I. INTRODUCTION
Nowadays, life is impossible without electricity. Electricity provides homes and public places with light and heat. Without it, health, education, finance, technology and other critical services collapse. Therefore, electricity consumption is without a doubt an important issue that has been of interest over the past few years.

It is always more challenging to examine densely populated places. For instance, Gaza Strip is a very small Palestinian territory that is home to a population of more than 2 million people. As a result, it has suffered from a chronic crisis in the electricity supply for many years, mainly because of the Israeli siege. This paper focuses on the Middle Province of Gaza Strip, also known as the Central Gaza Strip, which consists of the refugee camps of Bureij, al-Maghazi and al-Nussairat, and the city of Deir al-Balah. It attempts to provide a coherent electricity consumption forecast for the Middle Province of Gaza Strip using Singular Spectrum Analysis (SSA).
Although it is considered a new nonparametric tool, there exists an extensive literature on Singular Spectrum Analysis (SSA). To begin with, [1] assert that the role of SSA is to provide estimates of the statistical dimension. SSA also aims to describe the main physical phenomena reflected by the data. It gives adaptive spectral filters connected with the dominant oscillations of the system, and clarifies the data's noise characteristics. Moreover, [2] add that SSA is both a linear analysis and a prediction method. It is superior to the other classical spectral methods because of the data-adaptive character of the eigenelements it is based on. It can also use concepts from nonlinear dynamics. Adding to that, [3] asserts that SSA is a powerful tool used in time series analysis. It gives much more accurate forecast results than other methods when applied to many practical problems. Its aim is to decompose the original series into a small number of independent and interpretable components such as oscillatory components, structureless noise, and a slowly varying trend. Moreover, [4] illustrate that the SSA technique performs four steps: first, computing the trajectory matrix; second, constructing a matrix for applying the SVD; third, grouping, which corresponds to splitting the matrices computed at the SVD step; and finally, reconstructing the one-dimensional series. In addition, [5] add that SSA has been applied successfully in different branches such as the meteorological, biomechanical, hydrological and physical sciences, economics and others. The aim of SSA is to look for nonlinear, non-stationary, and intermittent or transient behavior in an observed time series. Furthermore, [6] employ SSA to decompose the original electricity price series into trend, periodic and noisy components. This approach is evaluated by analysing and forecasting the day-ahead electricity prices in the Australian and Spanish electricity markets. The forecasting results assert the dominance of the SSA approach compared with the other forecasting techniques. In addition, [7] assert that SSA is a powerful and well-developed tool of time series analysis and forecasting. SSA can be applied to a wide range of time series analysis problems, such as exploratory analysis for data mining and parameter estimation in signal processing. Adding to that, [8] assumes that SSA is a model-free tool that can be applied to all types of series. It comprises time series analysis tools, multivariate statistics tools, dynamical systems tools and signal processing tools. Finally, [9] investigate the use of SSA in mid-term forecasting of the monthly electricity consumption of the residential class in Brazil. The results show that the SSA method with graphical analysis of singular vectors presented the most accurate forecasts.

The section above contains a summary of electricity consumption with an application to the Middle Province of Gaza Strip, together with an overview of the previous literature. The remainder of the paper is organized as follows. Section 2 discusses the methodology. Section 3 contains the application to real data. Section 4 consists of a comparison between the SSA approach and other well-known models. Section 5 presents the conclusion of this paper.
II. METHODOLOGY
Consider a real-valued nonzero time series $Y_N = (y_1, y_2, \ldots, y_N)$ of length $N$. The main point of SSA is to decompose the original time series into a sum of independent fundamental components such as a slowly varying trend, oscillatory components and noise.

The SSA technique consists of two stages, decomposition and reconstruction, each of which contains two distinct steps. A brief description of each stage and a discussion of the methodology of the SSA technique are presented in the following sections, following [10] and [11].
2.1 Decomposition
This stage includes two steps: embedding
and Singular Value Decomposition.
2.1.1. Step 1: Embedding:
Embedding is a procedure that maps the univariate time series $Y_N = (y_1, y_2, \ldots, y_N)$ to a multivariate set of observations $X_1, X_2, \ldots, X_K$, where the lagged vector $X_i$, $i = 1, 2, \ldots, K$, is defined as $X_i = (y_i, y_{i+1}, \ldots, y_{i+L-1})^T \in \mathbb{R}^L$. Let $L$ ($1 < L < N$) be an integer called the window length (the single parameter of the embedding step) and $K = N - L + 1$. The trajectory matrix can then be constructed as shown in equation (1):

$$\mathbf{X} = [X_1 : X_2 : \cdots : X_K] = \begin{pmatrix} y_1 & y_2 & \cdots & y_K \\ y_2 & y_3 & \cdots & y_{K+1} \\ \vdots & \vdots & \ddots & \vdots \\ y_L & y_{L+1} & \cdots & y_N \end{pmatrix} \qquad (1)$$
The trajectory matrix $\mathbf{X}$ is a Hankel matrix, which means that all the elements on the anti-diagonals ($i + j = \text{const}$) are equal [3]. For example, consider the observations $Y = (10, 20, 30, 40, 50, 60)$, $N = 6$. Choosing the window length $L = 4$ gives $K = 6 - 4 + 1 = 3$, and the trajectory matrix is:

$$\mathbf{X} = \begin{pmatrix} 10 & 20 & 30 \\ 20 & 30 & 40 \\ 30 & 40 & 50 \\ 40 & 50 & 60 \end{pmatrix} \qquad (2)$$
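As a quick illustration, the embedding step can be sketched in a few lines of NumPy. This is not code from the paper, just a minimal sketch that reproduces the trajectory matrix of equation (2):

```python
import numpy as np

def trajectory_matrix(y, L):
    """Embed a series of length N into the L x K trajectory matrix, K = N - L + 1."""
    y = np.asarray(y, dtype=float)
    K = len(y) - L + 1
    # Column i is the lagged vector X_i = (y_i, ..., y_{i+L-1})^T
    return np.column_stack([y[i:i + L] for i in range(K)])

Y = [10, 20, 30, 40, 50, 60]
X = trajectory_matrix(Y, L=4)
print(X)  # rows (10 20 30), (20 30 40), (30 40 50), (40 50 60)
```

Note the Hankel structure: each anti-diagonal of the printed matrix carries a single value of the series.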
2.1.2. Step 2: Singular Value Decomposition (SVD):
From the trajectory matrix $\mathbf{X}$, define the matrix $\mathbf{X}\mathbf{X}^T$. Let $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_L \geq 0$ denote the eigenvalues of $\mathbf{X}\mathbf{X}^T$, and let $U_1, U_2, \ldots, U_L$ denote the corresponding eigenvectors, which in the SSA literature are called the empirical orthogonal functions. Let $d$ be the number of nonzero eigenvalues (that is, the rank of $\mathbf{X}\mathbf{X}^T$) and set the principal components $V_i = \mathbf{X}^T U_i / \sqrt{\lambda_i}$. Then the SVD of the trajectory matrix $\mathbf{X}$ can be expressed as a sum of matrices:

$$\mathbf{X} = \mathbf{E}_1 + \mathbf{E}_2 + \cdots + \mathbf{E}_d \qquad (3)$$

where $\mathbf{E}_i = \sqrt{\lambda_i}\, U_i V_i^T$, $i = 1, 2, \ldots, d$. The collection $(\sqrt{\lambda_i}, U_i, V_i)$ is referred to as the $i$th eigentriple (square root of eigenvalue, eigenvector, factor vector) of the matrix $\mathbf{X}$. The ratio $\lambda_i / \sum_{i=1}^{d} \lambda_i$ is the share of the trajectory matrix $\mathbf{X}$ explained by the $i$th term of the sum in (3). For more details see [12].
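To make the SVD step concrete, here is a small NumPy sketch (our illustration, not the paper's code) that computes the eigentriples of the example trajectory matrix and checks that the elementary matrices sum back to $\mathbf{X}$ as in equation (3):

```python
import numpy as np

X = np.array([[10., 20., 30.],
              [20., 30., 40.],
              [30., 40., 50.],
              [40., 50., 60.]])

S = X @ X.T
eigvals, U = np.linalg.eigh(S)            # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, U = eigvals[order], U[:, order]  # sort descending

d = int(np.sum(eigvals > 1e-9 * eigvals[0]))  # rank: number of nonzero eigenvalues

# Elementary matrices E_i = sqrt(lambda_i) U_i V_i^T with V_i = X^T U_i / sqrt(lambda_i)
E = [np.sqrt(eigvals[i]) * np.outer(U[:, i], X.T @ U[:, i] / np.sqrt(eigvals[i]))
     for i in range(d)]

print(d)                        # 2 -- the example matrix has rank 2
print(np.allclose(sum(E), X))   # True: equation (3) holds
```

The rank is 2 because the third column of the example matrix is a linear combination of the first two, which is why the worked example below has exactly two nonzero components.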
2.2 Reconstruction
This stage includes two steps: grouping and
averaging.
2.2.1. Step 3: Grouping:
The grouping procedure partitions the set $\{1, \ldots, d\}$ of indices of the elementary matrices into $m$ disjoint subsets $I_1, \ldots, I_m$. Let $I_k = \{i_{k,1}, \ldots, i_{k,p}\}$ be a group of indices, $k = 1, \ldots, m$. The matrix $\mathbf{X}_{I_k}$ corresponding to the group $I_k$ is then defined as $\mathbf{X}_{I_k} = \mathbf{E}_{i_{k,1}} + \mathbf{E}_{i_{k,2}} + \cdots + \mathbf{E}_{i_{k,p}}$, and the corresponding decomposition is written as:

$$\mathbf{X} = \mathbf{X}_{I_1} + \cdots + \mathbf{X}_{I_m} \qquad (4)$$
For a given group $I_k$, $k = 1, \ldots, m$, the share of the component $\mathbf{X}_{I_k}$ in the decomposition (4) is measured by the contribution of the corresponding eigenvalues, $\sum_{i \in I_k} \lambda_i / \sum_{i=1}^{d} \lambda_i$. For more details see [13]. Note that in the previous numeric example there are only two nonzero components, so at most two groups can be defined, namely $I_1 = \{1\}$ and $I_2 = \{2\}$.
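The eigenvalue shares of the worked example can be checked numerically; the following is our sketch, not code from the paper:

```python
import numpy as np

# Contribution of each eigentriple: share_i = lambda_i / (lambda_1 + ... + lambda_d)
X = np.array([[10., 20., 30.],
              [20., 30., 40.],
              [30., 40., 50.],
              [40., 50., 60.]])

eigvals = np.linalg.eigvalsh(X @ X.T)[::-1]   # descending order
eigvals = np.clip(eigvals, 0.0, None)         # clear tiny negative round-off
shares = eigvals / eigvals.sum()
print(shares)  # the first eigentriple dominates; the two zero eigenvalues contribute nothing
```

In this example the first (trend-like) eigentriple carries well over 99% of the total, which is why the reconstruction below splits the series into a dominant trend and a small residual component.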
2.2.2. Step 4: Diagonal averaging:
The group of $m$ components chosen in the previous step is used to rebuild the deterministic components of the original time series $Y_N$. The aim of diagonal averaging is to transform each of the matrix components $\mathbf{X}_{I_1}, \ldots, \mathbf{X}_{I_m}$ in equation (4) into a Hankel matrix $\tilde{\mathbf{X}}_{I_1}, \ldots, \tilde{\mathbf{X}}_{I_m}$, where $\tilde{\mathbf{X}}_{I_j}$ is the Hankelization of $\mathbf{X}_{I_j}$ for $j = 1, \ldots, m$. Expression (4) is thus transformed into the Hankelized form:

$$\mathbf{X} = \tilde{\mathbf{X}}_{I_1} + \cdots + \tilde{\mathbf{X}}_{I_m} \qquad (5)$$

If $w_{ij}$ stands for an element of a matrix $\mathbf{W}$, then the $k$th element of the produced time series is obtained by averaging $w_{ij}$ over all $i, j$ such that $i + j = k + 1$. (For example, the 3rd element is obtained as $(w_{1,3} + w_{2,2} + w_{3,1})/3$, and so on.)
Let $\tilde{y}_{j1}, \tilde{y}_{j2}, \ldots, \tilde{y}_{jN}$ denote the reconstructed components of the original time series corresponding to the Hankel matrix $\tilde{\mathbf{X}}_{I_j}$ for $j = 1, \ldots, m$. The original time series is therefore decomposed into the sum of $m$ series such that:

$$y_k = \sum_{j=1}^{m} \tilde{y}_{jk}, \qquad k = 1, 2, \ldots, N \qquad (6)$$
Performing diagonal averaging on the components of the previous example, the Hankelized expansion becomes:

$$\mathbf{X} = \tilde{\mathbf{X}}_{I_1} + \tilde{\mathbf{X}}_{I_2} = \begin{pmatrix} 15.38 & 21.63 & 28.70 \\ 21.63 & 28.70 & 38.27 \\ 28.70 & 38.27 & 49.91 \\ 38.27 & 49.91 & 62.39 \end{pmatrix} + \begin{pmatrix} -5.38 & -1.63 & 1.30 \\ -1.63 & 1.30 & 1.73 \\ 1.30 & 1.73 & 0.08 \\ 1.73 & 0.08 & -2.39 \end{pmatrix} \qquad (7)$$

Consequently, the reconstructed time series components are:

$$Y = \begin{pmatrix} 10 \\ 20 \\ 30 \\ 40 \\ 50 \\ 60 \end{pmatrix} = \begin{pmatrix} 15.38 \\ 21.63 \\ 28.70 \\ 38.27 \\ 49.91 \\ 62.39 \end{pmatrix} + \begin{pmatrix} -5.38 \\ -1.63 \\ 1.30 \\ 1.73 \\ 0.08 \\ -2.39 \end{pmatrix} \qquad (8)$$
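Diagonal averaging is easy to verify numerically. The following sketch (ours, not the paper's code) Hankelizes the two elementary components of the example and recovers the two reconstructed series of equation (8):

```python
import numpy as np

def hankelize(A):
    """Diagonal averaging: average over anti-diagonals i + j = const,
    yielding a series of length L + K - 1."""
    L, K = A.shape
    return np.array([np.mean([A[i, k - i]
                              for i in range(max(0, k - K + 1), min(L, k + 1))])
                     for k in range(L + K - 1)])

X = np.array([[10., 20., 30.],
              [20., 30., 40.],
              [30., 40., 50.],
              [40., 50., 60.]])
eigvals, U = np.linalg.eigh(X @ X.T)
order = np.argsort(eigvals)[::-1]
U = U[:, order]

# E_i = U_i U_i^T X (equivalent to sqrt(lambda_i) U_i V_i^T); Hankelize each component
components = [hankelize(np.outer(U[:, i], U[:, i]) @ X) for i in range(2)]
print(np.round(components[0], 2))  # ~ [15.38 21.63 28.70 38.27 49.91 62.39]
print(np.round(components[1], 2))  # ~ [-5.38 -1.63  1.30  1.73  0.08 -2.39]
```

Because Hankelization is linear and the example trajectory matrix is already a Hankel matrix, the two components sum back to the original series exactly, as equation (6) requires.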
2.3 Forecasting
The basic SSA forecasting method is known as the recurrent method (R-forecasting). It uses the concept of the Linear Recurrent Formula (LRF). A time series $Y_N = (y_1, \ldots, y_N)$ that satisfies an LRF can be successfully forecasted via SSA. $Y_N$ satisfies an LRF of order $d$ if:

$$y_t = \alpha_1 y_{t-1} + \cdots + \alpha_d y_{t-d}, \qquad t = d + 1, \ldots, N \qquad (9)$$

where $\alpha_1, \alpha_2, \ldots, \alpha_d$ are the coefficients of the LRF of order $d$. A wide class of time series is governed by LRFs, including harmonic, polynomial and exponential series. A summarized description of the so-called SSA recurrent forecasting algorithm follows (for more details see [10]).

The LRF coefficients $\alpha_1, \alpha_2, \ldots, \alpha_{L-1}$ in the linear combination (9) can be obtained using the eigenvectors acquired in the SVD step of the SSA algorithm. Let $U_j^{\nabla}$ denote the vector of the first $L - 1$ components of the eigenvector $U_j$, and let $\pi_j$ be the last component of $U_j$ (for $j = 1, \ldots, r$, $r < L$). The LRF coefficients can be calculated as follows:

$$\mathcal{R} = (\alpha_{L-1}, \ldots, \alpha_2, \alpha_1)^T = \frac{1}{1 - \nu^2} \sum_{j=1}^{r} \pi_j U_j^{\nabla} \qquad (10)$$

where $\nu^2 = \sum_{j=1}^{r} \pi_j^2$.
Bisher M. Iqelan, Int. Journal of Engineering Research and Application, www.ijera.com, ISSN: 2248-9622, Vol. 7, Issue 3 (Part 3), March 2017, pp. 88-96. DOI: 10.9790/9622-0703038896
Considering these notations, the time series $\hat{Y}_{N+h} = (\hat{y}_1, \ldots, \hat{y}_{N+h})$ can be defined by the following formula:

$\hat{y}_i = \begin{cases} \tilde{y}_i & \text{for } i = 1, \ldots, N \\ \sum_{j=1}^{L-1} a_j \hat{y}_{i-j} & \text{for } i = N+1, \ldots, N+h \end{cases}$   (11)

where $\tilde{y}_i$ ($i = 1, \ldots, N$) are the reconstructed time series values. Then $\hat{y}_{N+1}, \ldots, \hat{y}_{N+h}$ are the $h$-step-ahead recurrent forecasts. Regarding the previous numerical example, let $r = 3$. Then the LRF coefficients can be calculated and have the values:

$(a_1, a_2, a_3) = (0.50, 2.00, -1.50)$   (12)

and the 1-step-ahead forecast is then:

$\hat{y}_7 = 0.50 \times 62.39 + 2.00 \times 49.91 - 1.50 \times 38.27 = 73.61.$   (13)
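The recursion of eqs. (10)-(11) can be sketched in NumPy as follows. The names are illustrative, and for a verifiable toy check the linear series is forecast with $r = 2$ (which captures the rank-2 signal exactly and so continues the trend), rather than the grouping of the worked example above:

```python
import numpy as np

def ssa_rforecast(y, L, r, h):
    """h-step-ahead SSA recurrent forecast (R-forecasting).

    Reconstructs the series from the leading r eigentriples, derives the
    LRF coefficients from the eigenvectors (eq. (10)), and iterates the
    linear recurrence (eq. (11)) h steps past the end of the series.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r approximation
    rec = np.array([Xr[::-1].diagonal(k - (L - 1)).mean() for k in range(N)])
    P, pi = U[:L - 1, :r], U[L - 1, :r]                  # split eigenvectors
    R = (P @ pi) / (1.0 - np.sum(pi ** 2))               # LRF, eq. (10)
    out = list(rec)
    for _ in range(h):
        # R is ordered oldest-to-newest, matching the last L-1 values
        out.append(float(np.dot(R, out[-(L - 1):])))     # recurrence, eq. (11)
    return np.array(out)

fc = ssa_rforecast([10, 20, 30, 40, 50, 60], L=4, r=2, h=1)
print(round(fc[-1], 2))  # the linear trend continues: 70.0
```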
2.4 Parameters Selection
As explained in the previous sections, the SSA approach requires two important parameters: the window length $L$ (in the embedding step) and the grouping parameter $r$ (in the reconstruction stage). There have been considerable efforts and various suggested methods for selecting suitable values of $L$ and $r$ (see [10], [14], [15] and [16]). Taking into account theoretical results on the structure of the trajectory matrix and on separability, it seems better to select a window length no larger than $N/2$, with $N/3$ as another recommended value (see [17]). Moreover, the values of $L$ and $r$ should be selected depending on both the information provided by the time series under study and the analysis that needs to be performed.

In this paper, SSA is used as a forecasting technique. Therefore, the implemented selection criteria are based on the forecasting errors, and different error measures are used to assess the forecasts. These include the mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE) and mean absolute percentage error (MAPE). These accuracy measures are defined in Table 1 below.
Table 1. Accuracy Measures

Acronym  Definition                       Formula
MAE      Mean absolute error              $\frac{1}{n}\sum_{i=1}^{n} |e_i|$
MSE      Mean square error                $\frac{1}{n}\sum_{i=1}^{n} e_i^2$
RMSE     Root mean square error           $\sqrt{\mathrm{MSE}}$
MAPE     Mean absolute percentage error   $\frac{100}{n}\sum_{i=1}^{n} \left|\frac{e_i}{y_i}\right|$
where n is the number of observations in the
sample. Note that the selection of L and r is
obtained by minimizing the forecasting error using
these accuracy measures (see [16] and [18]).
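The four measures are straightforward to compute; a small helper (the name `accuracy_measures` is illustrative) might look like:

```python
import numpy as np

def accuracy_measures(actual, predicted):
    """MAE, MSE, RMSE and MAPE as defined in Table 1 (e_i = y_i - yhat_i)."""
    actual = np.asarray(actual, dtype=float)
    e = actual - np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(e))
    mse = np.mean(e ** 2)
    return {"MAE": mae,
            "MSE": mse,
            "RMSE": np.sqrt(mse),
            "MAPE": 100 * np.mean(np.abs(e / actual))}

m = accuracy_measures([2.0, 4.0], [1.0, 5.0])
print(m)  # MAE 1.0, MSE 1.0, RMSE 1.0, MAPE 37.5
```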
Figure 1. Monthly electricity consumption in the
middle area of Gaza Strip (2005:11-2016:12)
III. APPLICATION TO REAL DATA
The SSA approach is used to decompose and forecast the monthly electricity consumption series (in GWh) of the Middle Province of Gaza Strip. The time series contains 134 observations from November 2005 to December 2016. The data have been transformed by dividing them by 1,000,000. Table 2 shows some descriptive statistics for this data set, and Fig. 1 displays the electricity consumption over the period (2005:11-2016:12).
Table 2. Descriptive Statistics for Monthly Electricity Consumption (in GWh) for the Middle Area of Gaza, 2005:11-2016:12

n     Min.   1st Qu.  Median  Mean   3rd Qu.  Max.
134   5.26   10.22    13.45   12.81  15.56    20.02
Visual analysis of the time series drawn in Fig. 1 indicates that it has a trend, which can be approximated either by an exponentially increasing function or by a linear function. Moreover, the seasonal component appears complicated and changeable. Fig. 2 displays the periodogram of the electricity consumption series, which supports this assumption; the periodogram helps to clarify the frequency structure of the series.
Figure 2. Periodogram of the electricity consumption series (the first 70 points)
Moreover, the first 122 observations, from November 2005 to December 2015, are used as a training sample for model estimation, and the remaining 12 observations, from January 2016 to December 2016, are used as a testing data set to assess the electricity consumption forecasts. Results from the SSA technique are compared with the results of other models, namely the Box-Jenkins (ARIMA) model and the exponential smoothing (ETS) model. The SSA computations are implemented using the Rssa package in R (see [19]), while the other two models are fitted using the forecast package [20]. For each approach, the following performance indexes related to the forecasting errors are reported: MAE, MSE, RMSE and MAPE, as well as residual test statistics.
3.1. Electricity consumption by SSA
As explained in the previous section, the first step is to select the proper parameters, i.e., the window length $L$ and the number of grouped eigentriples $r$. As mentioned before, the suitable values of $L$ and $r$ are those that minimize the accuracy measures of Table 1.

The selection of $L$ and $r$ is carried out in two steps. The first step is to fix $L$ ($2 \le L \le N/2$) and to vary $r$ ($1 \le r \le L-1$) until reaching the pair $(L, r)$ which minimizes the accuracy measures; this step is repeated for different choices of $L$. The second step is to choose, among all pairs from the first step, the optimal pair $(L, r)$ that provides the minimum accuracy measures. Table 3 shows different choices of $(L, r)$ and the corresponding values of the accuracy measures (all values are rounded).
Table 3. Different selections of (L, r) with corresponding values of accuracy measures

(L, r)   MAE    MSE    RMSE   MAPE
(37,3) 1.568 3.952 1.988 0.102
(38,9) 1.516 3.914 1.978 0.101
(39,7) 1.501 3.622 1.903 0.097
(40,7) 1.416 3.560 1.887 0.094
(41,7) 1.454 4.014 2.004 0.097
(42,12) 1.445 3.417 1.848 0.095
(55,4) 1.528 4.134 2.033 0.102
(56,4) 1.488 3.886 1.971 0.099
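The two-step search described above amounts to a grid search over $(L, r)$ pairs scored by a forecast-error measure. A minimal sketch follows, scoring by RMSE only for brevity (the `forecaster` callable and the helper name `select_L_r` are illustrative; the paper considers all four measures of Table 1):

```python
import numpy as np

def select_L_r(train, test, L_values, forecaster):
    """Return the (L, r) pair minimizing out-of-sample RMSE.

    For each candidate window length L, every rank r = 1, ..., L-1 is
    tried; `forecaster(train, L, r, h)` is assumed to return an
    h-step-ahead forecast (e.g. an SSA recurrent forecast).
    """
    best_pair, best_rmse = None, np.inf
    test = np.asarray(test, dtype=float)
    for L in L_values:
        for r in range(1, L):
            fc = forecaster(train, L, r, len(test))
            rmse = np.sqrt(np.mean((test - fc) ** 2))
            if rmse < best_rmse:
                best_pair, best_rmse = (L, r), rmse
    return best_pair, best_rmse

# toy check with a naive "repeat the last value" forecaster
naive = lambda tr, L, r, h: np.full(h, tr[-1], dtype=float)
pair, rmse = select_L_r([1.0, 2.0, 3.0], [4.0, 5.0], [2, 3], naive)
print(pair, round(rmse, 4))
```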
Note that many other selections and their corresponding accuracy measures were also computed but, being inferior, they are not reported. Comparing the values in Table 3 shows that the best selection of the SSA parameters for the electricity consumption time series is $L = 40$ and $r = 7$. Fig. 3 illustrates the eigenvectors from the first 12 eigentriples; it shows the contribution of each eigentriple after the SVD stage, using $L = 40$ as the window length in the embedding step.
Figure 3. The first 12 principal components plotted
as time series.
Fig. 4 displays the plot of the reconstructed components. It describes the 12 most significant initial reconstructed components of the original electricity consumption time series. At a glance, the first two reconstructed components are related to slowly varying components (i.e., the trend behavior), while the remaining reconstructed components are connected to fluctuating components.
Figure 4. Reconstructed components related to the
first 12 eigentriples
Furthermore, for the sake of evaluating the separability of the various eigentriples, Fig. 5 displays a graphical representation of the w-correlation matrix. The correlation between components $m$ and $n$ is represented as a cell $(F_m, F_n)$, with a color scale from black to white corresponding to w-correlations of 1 to 0, respectively. Notice that cell $(F_1, F_2)$ is white (w-correlation = 0), so components 1 and 2 are clearly separable, whereas cell $(F_3, F_4)$ is grey (w-correlation = 0.25), so components 3 and 4 are almost separable. Components 5-6 and 7-8 have high w-correlations, so they could be grouped. Also, the first two eigentriples in Fig. 5 correspond to the trend components, while the next eigentriples produce harmonic (oscillatory) components. The large block of bright cells in the top-right corresponds to noise components. Therefore, the first 7 eigentriples are used in the reconstruction of the electricity consumption series.
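The w-correlation matrix underlying Fig. 5 weights each inner product by how often an element of the series appears in the trajectory matrix. A minimal sketch (the function name `w_correlation` is illustrative):

```python
import numpy as np

def w_correlation(components, L):
    """Weighted correlations between reconstructed components.

    `components` is an m x N array of reconstructed series; the weight
    w_k counts how many times the k-th element of the series occurs in
    the L x (N - L + 1) trajectory matrix.
    """
    m, N = components.shape
    K = N - L + 1
    w = np.array([min(k + 1, L, K, N - k) for k in range(N)], dtype=float)
    G = (components * w) @ components.T       # weighted Gram matrix
    d = np.sqrt(np.diag(G))
    return G / np.outer(d, d)

W = w_correlation(np.array([[1.0, 2.0, 3.0, 4.0],
                            [4.0, 3.0, 2.0, 1.0]]), L=2)
print(W.round(3))
```

Well-separated components show w-correlations near 0; values near 1 suggest the components should be grouped together, as done for components 5-6 and 7-8 above.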
IV. COMPARISON
In this section, the SSA approach is compared with two well-known models: the Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing (ETS) state space model.
Figure 5. A graphical representation of the w-
correlation matrix.
4.1. ARIMA Model
A non-stationary time series $X_t$ is said to follow an autoregressive integrated moving average model, denoted ARIMA$(p, d, q)$, if it can be expressed as:

$\phi_p(B)\,\nabla^d Y_t = \theta_q(B)\,\varepsilon_t$   (14)

where the $\varepsilon_t$ are identically and independently distributed as $N(0, \sigma^2)$, $t = 1, 2, \ldots, N$, and $N$ is the number of observations; $d$ is the order of non-seasonal differencing and $\nabla = 1 - B$ is the non-seasonal differencing operator; $\mu$ is the mean of the series, assuming that after differencing it is stationary.

As mentioned above, $B$ is the backshift operator, used to simplify the representation of lagged values, with $B X_t = X_{t-1}$. Also, $\phi_p(B)$ and $\theta_q(B)$ are the autoregressive polynomial in $B$ of order $p$ and the moving average polynomial in $B$ of order $q$ respectively, where:

$\phi_p(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p$   (15)

and

$\theta_q(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q$   (16)

More details can be found in ([21], [22], [23], [24] and [25]).
4.1.1. Parameters Estimation
The estimation of parameters for an ARIMA model is a nonlinear problem that requires special procedures such as the maximum likelihood method or nonlinear least-squares estimation. At this stage of model building, the estimated parameter values should minimize the sum of squared residuals. For this purpose, many software packages are available for fitting ARIMA models; in the current study, the forecast package in R is used. To choose the best ARIMA model for the observed data, the corrected Akaike Information Criterion (AICc) is used. For more details see [21].
Based on the methodology summarized above, the point forecasts for electricity consumption in the Middle Province of Gaza Strip for the 12 months from January 2016 to December 2016 were calculated. The results showed that the best model was ARIMA(1,1,4):

$(1 - 1.201B + 0.201B^2)\,Y_t = 0.028 + \theta(B)\,\varepsilon_t$   (17)

where $(1 - 1.201B + 0.201B^2) = (1 - 0.201B)(1 - B)$ combines the AR(1) factor with one difference, and

$\theta(B) = 1 - 0.202B - 0.102B^2 - 0.116B^3 - 0.100B^4$   (18)
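The AR side of the fitted ARIMA(1,1,4) is just the AR(1) factor (with coefficient 0.201) multiplied by one difference; this expansion can be checked numerically (a small sanity check, not part of the original paper):

```python
import numpy as np

# (1 - 0.201 B) from the AR(1) part, (1 - B) from one difference (d = 1)
ar = np.array([1.0, -0.201])
diff = np.array([1.0, -1.0])
combined = np.convolve(ar, diff)   # polynomial multiplication
print(combined)                    # coefficients of 1 - 1.201 B + 0.201 B^2
```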
4.2. Exponential Smoothing State Space Model
Another popular class of forecasting models is exponential smoothing. These techniques are simple but very helpful for time series forecasting, and were initially introduced in the early works of [26], [27] and [28]. The idea behind forecasting using exponential smoothing is to assign exponentially declining weights to observations as they age, i.e., recent observations receive larger weights than older ones.

Pegels [29] was the first to classify exponential smoothing techniques, proposing a taxonomy of the trend and seasonal components. Pegels' taxonomy was later extended by [30], who added the damped trend to the classification. This extension was then modified by [31], before the final extension was proposed by [32], who included damped multiplicative trends. For more details, see [33].
To fit an exponential smoothing model within a state space framework, the ets() function of the forecast package ([20], [34]) is used. It chooses the suitable model automatically on the basis of maximum likelihood estimation (MLE) and then calculates the point forecasts for electricity consumption in the Middle Area of Gaza Strip for the 12 months from January 2016 to December 2016. Applying the function, the results showed that the best performing model was ETS(M,A,N), i.e., a model with multiplicative error, additive trend and no seasonality. The fitted ETS(M,A,N) state space model is:
$\begin{aligned} y_t &= (\ell_{t-1} + b_{t-1})(1 + \varepsilon_t), \\ \ell_t &= (\ell_{t-1} + b_{t-1})(1 + 0.908\,\varepsilon_t), \\ b_t &= b_{t-1} + 0.0001\,(\ell_{t-1} + b_{t-1})\,\varepsilon_t, \end{aligned}$   (19)

with estimated smoothing parameters $\alpha = 0.908$ and $\beta = 0.0001$ and initial states $\ell_0 = 4.689$ and $b_0 = 0.118$, where the state vector $x_t = (\ell_t, b_t)^T$ includes the level and growth components respectively. For more details about the structures and notations of exponential smoothing state space models, see [33].
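The ETS(M,A,N) recursions can be applied directly once the parameters are known. The sketch below uses plain Python; `ets_man_forecast` is a hypothetical helper (not the forecast package's API), and the toy run uses a perfectly linear series so the expected forecasts are exact:

```python
def ets_man_forecast(y, alpha, beta, l0, b0, h):
    """Point forecasts from an ETS(M,A,N) model with given parameters.

    Runs the multiplicative-error, additive-trend recursions over the
    observed series, then extrapolates the final level and growth.
    """
    level, growth = l0, b0
    for obs in y:
        mu = level + growth              # one-step-ahead mean
        e = (obs - mu) / mu              # relative (multiplicative) error
        level = mu * (1 + alpha * e)     # level update
        growth = growth + beta * mu * e  # growth update
    return [level + (i + 1) * growth for i in range(h)]

# a perfectly linear series leaves the states exactly on trend
fc = ets_man_forecast([12.0, 14.0, 16.0], alpha=0.908, beta=0.0001,
                      l0=10.0, b0=2.0, h=2)
print(fc)  # -> [18.0, 20.0]
```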
4.3. Forecasting Results
Table 4 presents the 12-step-ahead forecasts of electricity consumption in the Middle Area of Gaza Strip produced by the three techniques. The scores show that the SSA forecasts are comparable with the forecasts acquired from the Box-Jenkins ARIMA(1,1,4) and the exponential smoothing state space ETS(M,A,N) models. Moreover, the SSA forecasts outperform those produced by the ARIMA and exponential smoothing models, and consequently the SSA-forecasted values for electricity consumption of the Middle Province in Gaza Strip are closest to the original data. In addition, SSA is able to provide further details about the decomposition of the time series.

Comparing the measures of forecast accuracy in Table 5, the SSA technique performs best on all measures. The predicted values using SSA are highly accurate: its MSE = 3.5604 is less than half of that of the other models, its RMSE = 1.8869 is about 1.5 times smaller, and its MAPE = 0.0938 is about 1.6 times smaller than those of the other models.
Table 4. Actual and Predicted Electricity Consumption (in GWh) for the Middle Province of Gaza Strip, 2016:1-2016:12

Month     Actual Data   SSA       ARIMA(1,1,4)   ETS(M,A,N)
01/2016 14.1886 17.2325 18.4250 18.4873
02/2016 12.5564 16.9441 19.1469 18.6053
03/2016 16.4811 16.8670 18.1730 18.7232
04/2016 15.1924 16.9982 18.7956 18.8412
05/2016 16.0113 17.2629 18.5982 18.9592
06/2016 17.4678 17.5599 18.8822 19.0771
07/2016 19.0960 17.8049 18.8836 19.1951
08/2016 18.1212 17.9669 19.0509 19.3131
09/2016 17.6333 18.0759 19.1208 19.4310
10/2016 16.0417 18.2045 19.2478 19.5490
11/2016 20.0189 18.4360 19.3413 19.6670
12/2016 19.2120 18.8230 19.4545 19.7849
By all measures, the SSA technique has the lowest forecasting errors.
Table 5. Forecast Accuracy for the SSA, ARIMA and Exponential Smoothing Techniques

Model          MAE     MSE     RMSE    MAPE
SSA 1.4158 3.5604 1.8869 0.0938
ARIMA(1,1,4) 2.2399 8.3199 2.8844 0.1499
ETS(M,A,N) 2.3597 8.5087 2.9170 0.1563
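As a cross-check, the SSA row of Table 5 can be recomputed from the actual and SSA columns of Table 4 (note that Table 5 reports MAPE as a proportion, i.e., without the factor of 100 from Table 1):

```python
import numpy as np

actual = np.array([14.1886, 12.5564, 16.4811, 15.1924, 16.0113, 17.4678,
                   19.0960, 18.1212, 17.6333, 16.0417, 20.0189, 19.2120])
ssa = np.array([17.2325, 16.9441, 16.8670, 16.9982, 17.2629, 17.5599,
                17.8049, 17.9669, 18.0759, 18.2045, 18.4360, 18.8230])
e = actual - ssa
mae = np.abs(e).mean()
mse = (e ** 2).mean()
rmse = np.sqrt(mse)
mape = np.abs(e / actual).mean()   # reported as a proportion in Table 5
print(round(mae, 4), round(mse, 4), round(rmse, 4), round(mape, 4))
# close to the Table 5 SSA row: 1.4158, 3.5604, 1.8869, 0.0938
```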
Fig. 6 exhibits the actual and forecasted electricity consumption in the Middle Province of Gaza Strip, as well as the 80% and 95% prediction intervals. A more detailed picture of the SSA technique can be drawn by examining the numbers in the previous tables.
Figure 6. Actual and forecasted (red line) electricity consumption in the Middle Province of Gaza Strip, with 80% and 95% (blue) prediction intervals.
However, as with exponential smoothing techniques, valid prediction intervals require that the forecast errors be uncorrelated and normally distributed with mean zero and constant variance.
The sample autocorrelation plot in Fig. 7 displays that most of the sample autocorrelation coefficients of the residuals lie within the confidence limits; consequently, the residuals behave like white noise, reflecting that the SSA technique is adequate.
Figure 7. Residual diagnostics for the SSA technique.
To examine the forecast errors more formally, three residual diagnostics are reported in Table 6: the Ljung-Box test p-value addresses autocorrelation in the forecast errors, the Jarque-Bera test p-value their normality, and the Ljung-Box test on the squared residuals the constancy of their variance.
Table 6. Residual Diagnostics Tests

Residual Test                                             p-value
Residual autocorrelation (Ljung-Box test)                 8.243 × 10⁻⁸
Residual normality (Jarque-Bera test)                     0.01426
Homoscedasticity (Ljung-Box test on squared residuals)    0.04635
V. CONCLUSION
The main objective of this research is to clarify the methodology of Singular Spectrum Analysis and to show that SSA can be successfully utilized to analyze and predict the monthly electricity consumption in the Middle Province of Gaza Strip, Palestine. This paper has explained that the SSA technique is a very powerful tool for decomposing a nonlinear and/or non-stationary time series into a collection of independent components and for forecasting it. In the given example, regarding the monthly electricity consumption, SSA performed best compared with other well-known forecasting models, namely the Box-Jenkins ARIMA model and the Exponential Smoothing State Space ETS model. The numerical results obtained using R packages showed that the errors produced by the SSA technique were smaller than those obtained by the ARIMA and ETS state space models according to the mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE) and mean absolute percentage error (MAPE); hence, SSA outperforms the ARIMA and ETS state space models. In addition, the obtained numerical results confirm the potential of the SSA technique for electricity consumption forecasting applications.
REFERENCES
[1] R. Vautard and M. Ghil, Singular Spectrum
Analysis in Nonlinear Dynamics, With
Applications to Paleoclimatic Time Series,
Physica D, 35, 1989, 395-424.
[2] R. Vautard, P. Yiou and M. Ghil, Singular-
Spectrum Analysis: A Toolkit for Short,
Noisy Chaotic Signals, Physica D, 58, 1992,
95-126.
[3] H. Hassani, Singular Spectrum Analysis:
Methodology and Comparison, Journal of
Data Science, 5(2), 2007, 239–257.
[4] N. Simões, L. Wang, S. Ochoa, J. P. Leitão, R. Pina, C. Onof, A. Sá Marques, Č. Maksimović, R. Carvalho, L. David, A Coupled SSA-SVM Technique for Stochastic Short-Term Rainfall Forecasting, 12th International Conference on Urban Drainage, Porto Alegre, 2011, 1-8.
[5] M. Zokaei, R. Mahmoudvand, N. Najari,
Comparison of Singular Spectrum Analysis
and ARIMA, Proc. 58th World Statistical
Congress, Dublin, 2011, 3991- 3996.
[6] A. Miranian, M. Abdollahzade, H. Hassani, Day-Ahead Electricity Price Analysis and Forecasting by Singular Spectrum Analysis, IET Generation, Transmission & Distribution, 7(4), 2012, 337–346.
[7] N. Golyandina and A. Korobeynikov, Basic
Singular Spectrum Analysis and forecasting
with R, Computational Statistics and Data
Analysis, 71, 2013, 934–954.
[8] C. Deng, Time Series Decomposition Using
Singular Spectrum Analysis, Master Thesis,
the Department of Mathematics,
East Tennessee State University, 2014.
[9] M. Lima de Menezes, R. Castro Souza and J.
Francisco Moreira Pessanha, Electricity
Consumption Forecasting Using Singular
Spectrum Analysis, Universidad Nacional de
Colombia, 82 (190), 2015, 138-146.
[10] N. Golyandina, V. Nekrutkin, and A.
Zhigljavski, Analysis of Time Series
Structure: SSA and Related Techniques (Boca
Raton: CRC Press, 2001).
[11] N. Golyandina, A. Zhigljavsky, Singular
Spectrum Analysis for Time Series (Springer
Science & Business Media, 2013).
[12] H. Hassani, S. Heravi, and A. Zhigljavsky,
Forecasting European Industrial Production
with Singular Spectrum Analysis,
International Journal of Forecasting, 25,
2009, 103–118.
[13] S. Rocco, Singular Spectrum Analysis and
Forecasting of Failure Time Series, Reliability
Engineering and System Safety, 114, 2013,
126–136.
[14] H. Hassani, R. Mahmoudvand and M. Zokaei,
Separability and Window Length in Singular
Spectrum Analysis, C. R. Acad. Sci. Paris,
Ser. I, 349, 2011, 987–990.
[15] R. Mahmoudvand and M. Zokaei, On the
Singular Values of the Hankel Matrix with
Application in Singular Spectrum Analysis,
Chilean Journal of Statistics 3, 2012, 43-56.
[16] R. Mahmoudvand, F. Alehosseini, M. Zokaei,
Feasibility of Singular Spectrum Analysis in
the Field of Forecasting Mortality Rate,
Journal of Data Science, 11, 2013, 851-866.
[17] N. Golyandina, A. Korobeynikov, Basic
Singular Spectrum Analysis and forecasting
with R, Computational Statistics & Data
Analysis, 71, 2014, 934-954.
[18] M. Ghodsi and M. Yarmohammadi, Exchange
Rate Forecasting With Optimum Singular
Spectrum Analysis, Journal of Systems
Science and Complexity – Springer, 27, 2014,
47–55.
[19] A. Korobeynikov, Computation- and space-
efficient implementation of SSA, Statistics
and Its Interface, 3(3), 2010, 257-368.
[20] R. J. Hyndman, forecast: Forecasting Functions for Time Series and Linear Models, R package version 8.0, 2017.
[21] B. M. Iqelan, Time Series Modeling of
Monthly Temperature Data of
Jerusalem/Palestine, MATEMATIKA, 31(2),
2015, 159–176.
[22] J. D. Cryer and K. Chan, Time Series Analysis
with Applications in R. (Springer, Second
edition, 2008).
[23] D.S. Shumway and D. S. Stoffer, Time Series
Analysis and Its Applications: With R
Examples (Springer, third edition, 2011).
[24] G. Box and G. Jenkins, Time Series Analysis:
Forecasting and Control (Holden-Day, the
University of Michigan, 1976).
[25] P. Brockwell and R. Davis, Time Series:
Theory and Methods (Springer; 2nd ed.
1991).
[26] R. G. Brown, Statistical Forecasting for
Inventory Control (McGraw-Hill, New York.
1959).
[27] C. C. Holt, Forecasting Trends and Seasonals
by Exponentially Weighted Averages (O.N.R.
Memorandum 52/1957, Carnegie Institute of
Technology. 1957).
[28] P. R. Winters, Forecasting Sales by
Exponentially Weighted Moving Averages,
Management Science, 6, 1960, 324–342.
[29] C. C. Pegels, Exponential Forecasting: Some
New Variations. Management Science 15(5),
1969, 311–315.
[30] Jr. Gardner, Exponential Smoothing: The
State of the Art, Journal of Forecasting 4,
1985, 1–28.
[31] R. J. Hyndman, A. B. Koehler, R. D. Snyder
and S. Grose, A State Space Framework for
Automatic Forecasting Using Exponential
Smoothing Methods, International Journal of
Forecasting, 18(3), 2002, 439–454.
[32] J. W. Taylor, Exponential Smoothing with a Damped Multiplicative Trend, International Journal of Forecasting, 19, 2003, 715–725.
[33] B. M. Iqelan, Comparison of Parametric and
Nonparametric Techniques for Water
Consumption Forecasting, International
Journal of Scientific & Engineering Research,
8(1), 2017, 1530-1536.
[34] R. J. Hyndman and Y. Khandakar, Automatic Time Series Forecasting: The Forecast Package for R, Journal of Statistical Software, 26(3), 2008, 1–22.