This paper proposes a hybrid approach, termed FT-LLNF, that combines the local linear neuro-fuzzy (LLNF) model with the fuzzy transform (F-transform) for energy consumption prediction. LLNF models are powerful in modelling and forecasting highly nonlinear, complex time series: starting from an optimal linear least-squares model, they add nonlinear neurons to the initial model as long as the model's accuracy improves. Trained by the local linear model tree (LOLIMOT) algorithm, LLNF models combine strong generalizability with outstanding performance. In addition, the recently introduced F-transform technique is employed as a time series pre-processing method. The F-transform, built on the concept of fuzzy partitions, removes noisy variations from the original time series and yields a well-behaved series that can be predicted with higher accuracy. The proposed hybrid FT-LLNF method is applied to the prediction of energy consumption in the United States and Canada. The prediction results, together with comparisons against optimized multi-layer perceptron (MLP) models and LLNF alone, reveal the promising performance of the proposed approach for energy consumption prediction and its potential for real-world applications.
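The F-transform pre-processing step described above can be illustrated with a short sketch. This is a minimal implementation of the direct and inverse F-transform over a uniform triangular fuzzy partition, assuming standard definitions; the node count, signal, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

def triangular_partition(n_nodes, length):
    """Uniform fuzzy partition: n_nodes triangular basis functions over
    sample indices 0..length-1 (they satisfy the Ruspini condition,
    summing to 1 at every interior point)."""
    nodes = np.linspace(0, length - 1, n_nodes)
    h = nodes[1] - nodes[0]
    x = np.arange(length)
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def f_transform_smooth(y, n_nodes=10):
    """Direct F-transform (weighted local means per basis function),
    followed by the inverse F-transform, yielding a denoised series."""
    A = triangular_partition(n_nodes, len(y))
    F = (A @ y) / A.sum(axis=1)   # direct F-transform components
    return F @ A                  # inverse F-transform reconstruction

# Noisy sine: the reconstruction should sit closer to the clean signal
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
noisy = clean + rng.normal(0, 0.2, t.size)
smooth = f_transform_smooth(noisy, n_nodes=15)
```

The number of partition nodes controls the trade-off: fewer nodes smooth more aggressively, more nodes track the original series more closely.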
MFBLP Method Forecast for Regional Load Demand System (CSCJournals)
Load forecasting plays an important role in the planning and operation of a power system. Accurate forecast values are necessary for economically efficient operation and for effective control. This paper describes a modified forward backward linear predictor (MFBLP) method for forecasting the regional load demand of New South Wales (NSW), Australia. The method is designed and simulated based on actual NSW load data. The accuracy of the discussed method is evaluated, and a comparison with previous methods is also reported.
Fractal representation of the power demand based on topological properties of... (IJECEIAES)
In a power system, the load demand comprises two components: the real power (P), due to resistive elements, and the reactive power (Q), due to inductive or capacitive elements. This paper presents a graphical representation of the electric power demand based on the topological properties of Julia sets, with the purpose of observing the different graphic patterns and their relationship with hourly load consumption. An algorithm that iterates complex numbers related to power is used to draw each fractal diagram of the load demand. The results show representative patterns related to each power consumption value and similar behaviour across the fractal diagrams, which helps in understanding consumption behaviour at different hours of the day. The study relates the different consumptions throughout the day, leading toward the prediction of different behaviour patterns of the curves.
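The iteration the abstract refers to can be sketched with the standard escape-time algorithm for the Julia set of z → z² + c. How the paper maps (P, Q) to the parameter c is not specified, so the mapping and the load values below are hypothetical, labeled as such.

```python
def escape_time(z0, c, max_iter=100, radius=2.0):
    """Standard escape-time iteration for the Julia set of z -> z**2 + c:
    returns the number of iterations before |z| exceeds the escape radius."""
    z = z0
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

# Hypothetical mapping: encode an hourly load as c = (P + jQ), scaled into
# the interesting parameter region of the quadratic family (assumption).
P, Q = 350.0, 120.0            # MW, MVAr -- illustrative values only
c = complex(P, Q) / 1000.0
grid = [[escape_time(complex(x, y), c)
         for x in (-1.0, 0.0, 1.0)]
        for y in (-1.0, 0.0, 1.0)]
```

A full fractal diagram is the same computation over a dense grid of starting points, with the escape count mapped to a colour.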
AC-Based Differential Evolution Algorithm for Dynamic Transmission Expansion ... (TELKOMNIKA JOURNAL)
This work proposes a method based on a mixed-integer nonlinear non-convex programming model to solve the multistage transmission expansion planning (TEP) problem. A meta-heuristic, the differential evolution algorithm (DEA), is employed as the optimization tool. An AC load flow model is used in solving the multistage TEP problem, so that accurate and realistic results can be obtained. Furthermore, the work checks constraints and system violations such as real and reactive power generation limits, the possible number of lines added, thermal limits, and bus voltage limits. The proposed technique is tested on well-known and realistic test systems, namely the IEEE 24-bus system and the Colombian 93-bus system. The method has shown high capability in handling active and reactive power in the same manner and in solving the TEP problem, producing good results with fast convergence times for the test systems.
Bi-objective Optimization Applied to Environmental and Economic Dispatch Probl... (ijceronline)
The optimal synthesis of scanned linear antenna arrays (IJECEIAES)
In this paper, symmetric scanned linear antenna arrays are synthesized in order to minimize the side lobe level of the radiation pattern. The feeding current amplitudes are taken as the optimization parameters. Newly proposed optimization algorithms are presented to achieve this target: the Antlion Optimization (ALO) and a new hybrid algorithm. Three different examples are illustrated: 20-, 26-, and 30-element scanned linear antenna arrays. The obtained results demonstrate the effectiveness of the proposed algorithms and their ability to outperform and compete with other algorithms such as Symbiotic Organisms Search (SOS) and the Firefly Algorithm (FA).
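The quantity any such synthesis optimizes is the array factor of the steered array, from which the side lobe level is read off. A minimal sketch follows, assuming a uniformly spaced array with half-wavelength spacing and uniform amplitudes as the baseline; the element count and steering angle are illustrative, not the paper's configurations.

```python
import numpy as np

def array_factor(amps, d_over_lambda, theta, theta0):
    """Array factor magnitude of a uniformly spaced linear array steered
    to theta0; amps are the per-element current amplitudes (the
    optimization variables in the synthesis problem)."""
    n = np.arange(len(amps))
    k_d = 2 * np.pi * d_over_lambda
    psi = k_d * (np.cos(theta) - np.cos(theta0))   # progressive phase term
    return np.abs(np.sum(amps[:, None] * np.exp(1j * np.outer(n, psi)), axis=0))

theta = np.linspace(1e-3, np.pi - 1e-3, 1801)      # observation angles
amps = np.ones(20)                                 # uniform 20-element baseline
af = array_factor(amps, 0.5, theta, np.pi / 2)     # broadside scan
af_db = 20 * np.log10(af / af.max())               # normalized pattern in dB
```

A metaheuristic such as ALO would perturb `amps` and score each candidate by the largest `af_db` value outside the main beam.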
Fault diagnosis using genetic algorithms and principal curves (eSAT Journals)
Abstract: Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms. The principal curve, introduced by Hastie, is a generalization of linear principal component analysis (PCA): a parametric curve that passes satisfactorily through the middle of the data. Existing principal curve algorithms use the first principal component of the data as the initial estimate of the curve. However, this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which lines are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic algorithm, Nonlinear principal component analysis, Fault detection.
New emulation based approach for probabilistic seismic demand (Sebastian Contreras)
This document describes a new statistical emulation approach for probabilistic seismic demand assessment. The approach uses Gaussian process emulation to model the relationship between engineering demand parameter (EDP) and intensity measure (IM), as an alternative to the standard cloud method. The emulator is trained using data from nonlinear time-history analyses of a case study building. Two "assumed realities" are generated to test the emulator's performance compared to the cloud method. Results show the emulator approach provides improved coverage probability and average length over the cloud method, while maintaining similar accuracy. The emulator is more flexible than the cloud method and can better estimate EDP-IM relationships that do not follow a power law.
Applying of Double Seasonal ARIMA Model for Electrical Power Demand Forecasti... (IJECEIAES)
Predicting electric power usage is very important for maintaining the balance between supply and demand in a power generation system. Because electrical power demand fluctuates at the electricity load center, an accurate forecasting method is required to continuously maintain the efficiency and reliability of the power generation system; such conditions greatly affect its dynamic stability. The objective of this research is to apply the Double Seasonal Autoregressive Integrated Moving Average (DSARIMA) model to predict electricity load. Half-hourly load data over a three-year period from the PT PLN power plant unit in Gresik, Indonesia, are used as a case study. The parameters of the DSARIMA model are estimated by the least squares method. The results show that the best model for these data is a subset DSARIMA of order ([1,2,7,16,18,35,46], 1, [1,3,13,21,27,46])(1,1,1)_48(0,0,1)_336, with a MAPE of about 2.06%. Future research could use these predictions as inputs for optimal control parameters on the power system side.
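For half-hourly data, the two seasonal periods in the DSARIMA order are 48 (daily cycle) and 336 (weekly cycle). The differencing those orders imply can be sketched as below; the toy series is an assumption for illustration, not the PLN data.

```python
def difference(series, lag=1):
    """One differencing pass with the given lag: y[t] - y[t - lag]."""
    return [series[t] - series[t - lag] for t in range(lag, len(series))]

# Half-hourly data: lag 48 removes the daily cycle, lag 336 the weekly
# cycle, and lag 1 the trend -- matching the model's d=1, D_48=1, D_336=1.
y = [float(t % 48) + 0.01 * t for t in range(3 * 336)]  # toy daily cycle + trend
z = difference(difference(difference(y, 48), 336), 1)
```

On this toy series the triple differencing removes both the periodic component and the trend, leaving (numerically) zeros; on real load data the ARMA terms then model what remains.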
The document analyzes crop yield data from spatial locations in Guntur District, Andhra Pradesh, India using hybrid data mining techniques. It first applies k-means clustering to the dataset, producing 5 clusters. It then applies the J48 classification algorithm to the clustered data, resulting in a decision tree that predicts cluster membership based on attributes like crop type, irrigated area, and latitude. Analysis found irrigated areas of cotton and chilies increased from 2007-2008 to 2011-2012. Association rule mining on the clustered data also found relationships between productivity and location attributes. The hybrid approach of clustering followed by classification effectively analyzed the spatial agricultural data.
This document is the final report for an optimal system operation project. It was authored by Oswaldo Guerra Gomez, a student at Saxion University of Applied Sciences in the Netherlands, under the supervision of Mr. Nguyen Trung Thang and Mr. Jan Bollen. The report details the student's research and implementation of the Flower Pollination Algorithm (FPA) to solve the economic load dispatch (ELD) problem of minimizing power generation costs while meeting demand. The student analyzed various optimization algorithms, programmed the FPA in MATLAB, and tested it on standard functions and a 6-unit power system, finding that it outperformed other algorithms in accuracy and speed. The report also includes a documentary about Vietnam.
Use of Evolutionary Polynomial Regression (EPR) for Prediction of Total Sedim... (CSCJournals)
This study presents the use of Evolutionary Polynomial Regression (EPR) in predicting the total sediment load of ten selected rivers in Malaysia. EPR is a data-driven hybrid technique based on evolutionary computing. To apply the method, the extensive database of the Department of Irrigation and Drainage (DID), Ministry of Natural Resources & Environment, Malaysia was sought, and unrestricted access was granted. The EPR technique produced greatly improved results compared with previous sediment load methods. A robustness study was performed to confirm the generalisation ability of the developed EPR model, and a sensitivity analysis was conducted to determine the relative importance of the model inputs. The performance of the EPR model demonstrates its predictive capability and generalisation ability for highly nonlinear problems in river engineering applications, such as sediment transport.
Comparative study of methods for optimal reactive power dispatch (elelijjournal)
Reactive power dispatch plays a major role in providing secure and economic operation of a power system. Optimal reactive power dispatch is used to improve the voltage profile, reduce losses, improve voltage stability, and reduce cost. This paper presents a brief literature survey of reactive power dispatch and a comparative study of conventional and evolutionary computation techniques applied to it. The paper is useful to researchers as a basis for further study and for application in various areas of power systems.
The document analyzes and compares the C4.5 and K-nearest neighbor (KNN) algorithms for clustering data and determining priority development areas in Papua Province, Indonesia. It first discusses Klassen typology for classifying areas based on economic growth and per capita income. It then provides an overview of the C4.5 and KNN algorithms, including pseudo code for analyzing their time complexities using the Big O notation. The study applies these algorithms to PDRB and economic growth data from 29 regencies in Papua from 2006-2012 to determine priority development areas, running the algorithms multiple times on different sized data sets to analyze running time.
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
A Preference Model on Adaptive Affinity Propagation (IJECEIAES)
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP), a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs of AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine the value of the preference p that yields an optimal clustering solution. To resolve this limitation, we propose a new model of the preference p based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run; MAAP omits the adaptive p-scanning algorithm of the original Adaptive AP (AAP) procedure. Experimental results on random non-partition and partition data sets show that (i) the proposed algorithm, MAAP-DDP, is slower than the original AP on the random non-partition dataset, and (ii) on the random 4-partition dataset and the real datasets it succeeds in identifying clusters matching the number of true labels, with execution times comparable to those of the original AP. Moreover, the MAAP-DDP algorithm proves more feasible and effective than the original AAP procedure.
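The two AP inputs named above can be sketched concretely. This assumes the usual negative-squared-distance similarity and models p as a quantile of the off-diagonal similarity distribution (the median is a common default); the exact distribution-based formula of the paper's MAAP-DDP model may differ, and the data points are illustrative.

```python
import numpy as np

def similarity_matrix(X):
    """Negative squared Euclidean distances, the usual AP similarity."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return -d2

def preference_from_distribution(S, q=0.5):
    """Model the AP 'preference' p from the distribution of off-diagonal
    similarities; q=0.5 (the median) is a common default choice."""
    off_diag = S[~np.eye(len(S), dtype=bool)]
    return float(np.quantile(off_diag, q))

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
S = similarity_matrix(X)
p = preference_from_distribution(S)   # one p shared by all exemplars
```

Lower quantiles of the similarity distribution push AP toward fewer exemplars; higher quantiles toward more, which is exactly the knob the preference model tunes.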
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... (Editor IJCATR)
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computing; its aim is to create a supercomputer from free resources. One of the challenges in Grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling in the Grid is a non-deterministic problem, deterministic algorithms cannot be used to improve it. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the problem of independent task scheduling in a Grid environment, with the aim of reducing makespan and energy. Experimental results compare ICA with other algorithms and show that ICA achieves a shorter makespan and lower energy than the others. Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
Reconstruction of Time Series using Optimal Ordering of ICA Components (rahulmonikasharma)
This document summarizes research investigating the application of independent component analysis (ICA) and optimal ordering of independent components to reconstruct time series generated by mixed independent sources. A modified fast neural learning ICA algorithm is used to obtain independent components from observed time series. Different error measures and algorithms for determining the optimal ordering of independent components for reconstruction are compared, including exhaustive search, using the L-infinity norm of components, and excluding the least contributing component first. Experimental results on artificial and currency exchange rate time series support using the Euclidean error measure and minimizing the error profile area to obtain the optimal ordering. Reconstructions with only the first few dominant independent components are often acceptable.
Solar PV parameter estimation using multi-objective optimisation (journalBEEI)
The estimation of the electrical model parameters of solar PV, such as the light-induced current, diode dark saturation current, thermal voltage, series resistance, and shunt resistance, is indispensable for predicting the actual electrical performance of solar photovoltaic (PV) modules under changing environmental conditions. This paper first reviews the various methods of solar PV parameter estimation to highlight their shortfalls. Thereafter, a new parameter estimation method based on multi-objective optimisation, namely the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), is proposed. To check the effectiveness and accuracy of the proposed method, conventional methods such as the 'Newton-Raphson' and 'Particle Swarm Optimisation' search algorithms were tested on four solar PV modules of polycrystalline and monocrystalline materials. Finally, the solar PV module Photowatt PWP201 was considered and compared with six different state-of-the-art methods. The estimated performance indices, such as current absolute error metrics, absolute relative power error, mean absolute error, and the P-V characteristic curve, were compared. The results show the close proximity of the characteristic curve obtained with the proposed NSGA-II method to the curve obtained from the manufacturer's datasheet.
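The five parameters listed above define the single-diode model, whose current equation is implicit in I and is typically solved numerically; any estimation method is scored by how well the solved I-V curve matches measurements. A minimal Newton-iteration sketch follows; the parameter values are illustrative, not the PWP201 datasheet values.

```python
import math

def pv_current(v, i_ph, i_0, v_t, r_s, r_sh, tol=1e-10):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh
    for the current I at terminal voltage V, by Newton's method.
    v_t here is the modified thermal voltage n*Ns*kT/q of the module."""
    i = i_ph  # the photo-current is a good starting guess
    for _ in range(100):
        e = math.exp((v + i * r_s) / v_t)
        f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * e * r_s / v_t - r_s / r_sh - 1.0
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i

# Illustrative module parameters (assumptions, not fitted values)
params = dict(i_ph=5.0, i_0=1e-9, v_t=1.5, r_s=0.2, r_sh=200.0)
i_sc = pv_current(0.0, **params)   # short-circuit current, close to Iph
```

A parameter-estimation routine (NSGA-II or otherwise) wraps this solver, minimizing the error between `pv_current` and measured currents across the voltage sweep.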
Clustering using kernel entropy principal component analysis and variable ker... (IJECEIAES)
Clustering, as an unsupervised learning method, is the task of dividing data objects into clusters with common characteristics. In this paper, we introduce an enhancement of the existing EPCA data transformation method. By incorporating a kernel function into EPCA, the input space can be mapped implicitly into a high-dimensional feature space. Then, Shannon's entropy, estimated via the inertia contributed by every mapped object, is the key measure for determining the optimal extracted feature space. Our proposed method works well with the clustering algorithm based on fast search of cluster centers via local density computation. Experimental results show that the approach is feasible and efficient.
This document presents a new method for creating proxy models called Simulated Annealing Programming (SAP). SAP uses simulated annealing optimization on a tree structure to find the equation that best predicts outputs with minimum error. The authors apply SAP to create a model for predicting gas compressor torque based on fuel rate and speed. They find the SAP model is highly accurate, roughly independent of internal parameters, and has smaller overestimation/underestimation compared to other methods like polynomials and neural networks.
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
SCA: A Sine Cosine Algorithm for solving optimization problems (laxmanLaxman03209)
The document proposes a new population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. SCA creates multiple random initial solutions and uses sine and cosine functions to fluctuate the solutions outward or toward the best solution, emphasizing exploration and exploitation. The performance of SCA is evaluated on test functions, qualitative metrics, and by optimizing the cross-section of an aircraft wing, showing it can effectively explore, avoid local optima, converge to the global optimum, and solve real problems with constraints.
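The sine/cosine position update described above fits in a short sketch. This follows the commonly published SCA update rule (r1 decreasing linearly, random r2, r3, r4 per coordinate), applied here to the sphere function; population size, iteration count, and bounds are illustrative choices.

```python
import math
import random

def sca_minimize(f, dim, bounds, n_agents=20, max_iter=200, a=2.0, seed=1):
    """Core Sine Cosine Algorithm: each coordinate moves by
    r1*sin(r2)*|r3*dest - x| or r1*cos(r2)*|r3*dest - x|, with r1
    decreasing linearly from a to 0 (exploration -> exploitation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    dest = min(X, key=f)[:]                 # destination = best solution so far
    for t in range(max_iter):
        r1 = a - t * a / max_iter
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                trig = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
                x[j] += r1 * trig * abs(r3 * dest[j] - x[j])
                x[j] = min(max(x[j], lo), hi)   # clamp to the search bounds
            if f(x) < f(dest):
                dest = x[:]
    return dest, f(dest)

sphere = lambda x: sum(v * v for v in x)    # global minimum 0 at the origin
best, best_val = sca_minimize(sphere, dim=5, bounds=(-10, 10))
```

The sign of sin(r2)/cos(r2) decides whether an agent moves toward or past the destination, which is what gives SCA its mix of exploitation and exploration.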
A Solution to Optimal Power Flow Problem using Artificial Bee Colony Algorith... (IOSR Journals)
This document presents an artificial bee colony (ABC) algorithm approach to solve the optimal power flow (OPF) problem incorporating a flexible AC transmission system (FACTS) device, specifically a static synchronous series compensator (SSSC). The ABC algorithm is tested on the IEEE 14-bus test system both with and without the SSSC. Results show that the ABC algorithm gives a better solution when incorporating the SSSC, improving the system performance in terms of lower total cost, lower power losses, and better voltage profile compared to the case without SSSC.
This document proposes a new quantile-based fuzzy time series forecasting model. It begins by discussing how time series models have been used to predict things like enrollments, weather, accidents, and stock prices. It then provides background on quantile regression models and fuzzy time series forecasting. The proposed method develops a time variant quantile-based fuzzy time series forecasting approach based on predicting future data trends. The method converts statistical quantiles to fuzzy quantiles using membership functions and provides a fuzzy metric to calculate future values based on trend forecasts. The model is applied to TAIFEX forecasting and shows better performance than other models in terms of complexity and accuracy.
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA (acijjournal)
A well-constructed classification model depends heavily on the input feature subsets of a dataset, which may contain redundant, irrelevant, or noisy features. This challenge can be worse when dealing with medical datasets. The main aim of feature selection as a pre-processing task is to eliminate such features and select the most effective ones. In the literature, metaheuristic algorithms have shown successful performance in finding optimal feature subsets. In this paper, two binary metaheuristic algorithms, the S-shaped binary Sine Cosine Algorithm (SBSCA) and the V-shaped binary Sine Cosine Algorithm (VBSCA), are proposed for feature selection from medical data. In these algorithms, the search space remains continuous, while a binary position vector is generated for each solution by two transfer functions, S-shaped and V-shaped. The proposed algorithms are compared with four recent binary optimization algorithms on five medical datasets from the UCI repository. The experimental results confirm that both bSCA variants enhance classification accuracy on these medical datasets compared with the four other algorithms.
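The two transfer-function families can be sketched with their most common representatives: a sigmoid for the S-shaped case (continuous value → probability the bit is 1) and |tanh| for the V-shaped case (continuous value → probability the current bit is flipped). The exact functions and the example vectors below are assumptions in that spirit, not necessarily the paper's choices.

```python
import math
import random

def s_shaped(x):
    """S-shaped transfer: sigmoid mapping a continuous value to the
    probability that the corresponding bit is set to 1."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer: |tanh(x)| gives the probability that the
    corresponding bit is flipped (0 at x=0, approaching 1 for large |x|)."""
    return abs(math.tanh(x))

def binarize(position, current_bits, rng, shape="s"):
    """Turn a continuous SCA position into a binary feature-selection
    vector, using either transfer function."""
    bits = []
    for x, b in zip(position, current_bits):
        if shape == "s":
            bits.append(1 if rng.random() < s_shaped(x) else 0)
        else:                       # v-shaped: flip with probability |tanh(x)|
            bits.append(1 - b if rng.random() < v_shaped(x) else b)
    return bits

rng = random.Random(0)
selected = binarize([2.5, -3.0, 0.0, 4.0], [0, 0, 0, 0], rng, shape="s")
```

Each binary vector is then scored by the classification accuracy obtained using only the selected features.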
Cost Aware Expansion Planning with Renewable DGs using Particle Swarm Optimiz... (IJERA Editor)
This paper is an attempt to develop an expansion-planning algorithm using meta-heuristic methods. Expansion planning is always needed, as power demand keeps increasing, and meta-heuristic methods enable the cost-efficient expansion planning desired in the proposed work. Distributed generation is widely researched for future energy needs, as it is pollution-free and can be installed in rural places. In this paper, optimal distributed generation expansion planning with Particle Swarm Optimization (PSO) and the Cuckoo Search Algorithm (CSA) identifies the location, size, and type of distributed generator for predicted future demand, with lowest cost as the objective. The objective function minimizes the total cost, including the installation and operating costs of the renewable DGs. A MATLAB-based simulation using an M-file program is used for the implementation, and an Indian distribution system is used for testing the results.
This document summarizes the application of computational intelligence techniques like genetic algorithms and particle swarm optimization for solving economic load dispatch problems. It first applies a real-coded genetic algorithm to minimize generation costs for a 6-generator test system with continuous fuel cost equations, showing superiority over quadratic programming. It then uses particle swarm optimization to minimize costs for a 10-generator system with each generator having discontinuous fuel options, showing better results than other published methods. The document provides background on economic load dispatch problems and optimization techniques like quadratic programming, genetic algorithms, and particle swarm optimization.
Forecasting of electric consumption in a semiconductor plant using time serie...Alexander Decker
This document summarizes a study that used time series methods to forecast electricity consumption in a semiconductor plant. The study analyzed 36 months of historical electricity consumption data from 2010-2012 to select the best forecasting model. Single exponential smoothing was found to have the lowest Mean Absolute Percentage Error (MAPE) of 5.60% and was determined to be the best forecasting method. The selected model will be used to forecast future electricity consumption for the plant.
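The smoothing recursion the study selected, and the MAPE criterion used to pick it, can be sketched as follows. The smoothing constant and the sample figures are illustrative assumptions, not values from the study.

```python
def ses_forecast(series, alpha=0.3):
    # Single exponential smoothing: the level is updated as
    # l = alpha * y + (1 - alpha) * l; the final level is the
    # one-step-ahead forecast
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

history = [120.0, 118.0, 125.0, 123.0]   # hypothetical monthly kWh figures
print(ses_forecast(history))
```

In practice, the model with the lowest MAPE over a hold-out period is selected, as the study did when it chose single exponential smoothing.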
Short-Term Forecasting of Electricity Consumption in Palestine Using Artifici...ijaia
Nowadays, planning for electricity consumption demand is one of the key success factors for the development of countries. Because of the importance of electricity, countries pay great attention to predicting its consumption. Electricity consumption prediction is a major problem for the power sector; an efficient prediction helps electricity companies make the right decisions and optimize their supply strategies. In this paper, we propose a model that predicts future electricity consumption from previous consumption. The model lets companies and authorities anticipate future consumption so they can organize their distribution and make suitable plans to keep the delivery and distribution of electricity stable. We aim to create a model that studies previous electricity consumption patterns and uses this data to predict future consumption. The system analyzes collected electricity consumption data from previous years, then uses the mean value for each day together with a Multilayer Feed-Forward Backpropagation Neural Network (MFFNNBP) to predict future electricity consumption in Palestine. The data used in this paper consist of monthly and yearly collections. Finally, the proposed model follows a systematic process aimed at determining future electricity consumption in Palestine. The proposed application and the results in this paper are intended to contribute to improving current energy-planning tools in Palestine. The experimental results show that the model predicts well, with low Mean Square Error (MSE).
The document analyzes crop yield data from spatial locations in Guntur District, Andhra Pradesh, India using hybrid data mining techniques. It first applies k-means clustering to the dataset, producing 5 clusters. It then applies the J48 classification algorithm to the clustered data, resulting in a decision tree that predicts cluster membership based on attributes like crop type, irrigated area, and latitude. Analysis found irrigated areas of cotton and chilies increased from 2007-2008 to 2011-2012. Association rule mining on the clustered data also found relationships between productivity and location attributes. The hybrid approach of clustering followed by classification effectively analyzed the spatial agricultural data.
The document is a final report for an optimal system operation project. It was authored by Oswaldo Guerra Gomez, a student at Saxion University of Applied Sciences in the Netherlands under the supervision of Mr. Nguyen Trung Thang and Mr. Jan Bollen. The report details the student's research and implementation of the Flower Pollination Algorithm (FPA) to solve the economic load dispatch (ELD) problem of minimizing power generation costs while meeting demand. The student analyzed various optimization algorithms, programmed FPA in MATLAB, and tested it on standard functions and a 6-unit power system, finding it outperformed other algorithms in accuracy and speed. The report also includes a documentary about Vietnam made during the
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Use of Evolutionary Polynomial Regression (EPR) for Prediction of Total Sedim...CSCJournals
This study presents the use of Evolutionary Polynomial Regression (EPR) in predicting the total sediment load of ten selected rivers in Malaysia. EPR is a data-driven hybrid technique, based on evolutionary computing. In order to apply the method, the extensive database of the Department of Irrigation and Drainage (DID), Ministry of Natural Resources & Environment, Malaysia was sought, and unrestricted access was granted. The EPR technique produced greatly improved results compared to other previous sediment load methods. A robustness study was performed in order to confirm the generalisation ability of the developed EPR model, and a sensitivity analysis was also conducted to determine the relative importance of model inputs. The performance of the EPR model demonstrates its predictive capability and generalisation ability to solve highly nonlinear problems of river engineering applications, such as sediment.
Comparative study of methods for optimal reactive power dispatchelelijjournal
Reactive power dispatch plays a central role in providing secure and economic operation of a power system. Optimal reactive power dispatch helps improve the voltage profile, reduce losses, improve voltage stability, and reduce cost. This paper presents a brief literature survey of reactive power dispatch and a comparative study of conventional and evolutionary computation techniques applied to it. The paper is useful for researchers pursuing further study and application across various areas of power systems.
The document analyzes and compares the C4.5 and K-nearest neighbor (KNN) algorithms for clustering data and determining priority development areas in Papua Province, Indonesia. It first discusses Klassen typology for classifying areas based on economic growth and per capita income. It then provides an overview of the C4.5 and KNN algorithms, including pseudo code for analyzing their time complexities using the Big O notation. The study applies these algorithms to PDRB and economic growth data from 29 regencies in Papua from 2006-2012 to determine priority development areas, running the algorithms multiple times on different sized data sets to analyze running time.
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
A Preference Model on Adaptive Affinity PropagationIJECEIAES
In recent years, two new data clustering algorithms have been proposed. One of them is Affinity Propagation (AP). AP is a data clustering technique that uses iterative message passing and considers all data points as potential exemplars. Two important inputs to AP are a similarity matrix (SM) of the data and the parameter "preference" p. Although the original AP algorithm has shown much success in data clustering, it still suffers from one limitation: it is not easy to determine the value of the parameter "preference" p that yields an optimal clustering solution. To resolve this limitation, we propose a new model of the parameter "preference" p, based on the similarity distribution. Given the SM and p, the Modified Adaptive AP (MAAP) procedure is run; MAAP omits the adaptive p-scanning algorithm of the original Adaptive-AP (AAP) procedure. Experimental results on random non-partition and partition datasets show that (i) the proposed algorithm, MAAP-DDP, is slower than original AP on a random non-partition dataset, and (ii) on a random 4-partition dataset and real datasets, the proposed algorithm identifies clusters matching the number of the datasets' true labels with execution times comparable to original AP. The MAAP-DDP algorithm is also more feasible and effective than the original AAP procedure.
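Deriving the preference p from the similarity distribution can be sketched as below. The median of the off-diagonal similarities is the standard baseline choice in the AP literature; it is used here only as an assumed stand-in for the paper's own distribution-based model.

```python
import statistics

def preference_from_similarities(S):
    # Collect the off-diagonal entries of the similarity matrix and
    # take their median as the shared preference value p (a common
    # baseline; the paper's distribution-based model may differ)
    vals = [S[i][j] for i in range(len(S))
            for j in range(len(S)) if i != j]
    return statistics.median(vals)

S = [[0.0, -1.0, -4.0],
     [-1.0, 0.0, -2.0],
     [-4.0, -2.0, 0.0]]
print(preference_from_similarities(S))
```

A lower p tends to produce fewer clusters; deriving p from the similarity distribution removes the need for the adaptive p-scanning loop.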
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat...Editor IJCATR
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to free resources, with the aim of creating a supercomputer. One challenge in Grid computing is the scheduling problem, regarded as a tough issue. Since scheduling in the Grid is non-deterministic, deterministic algorithms cannot be used to improve it. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the problem of independent task scheduling in a grid environment, with the aim of reducing makespan and energy. Experimental results compare ICA with other algorithms and show that ICA finds a shorter makespan and lower energy than the others. Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
Reconstruction of Time Series using Optimal Ordering of ICA Componentsrahulmonikasharma
This document summarizes research investigating the application of independent component analysis (ICA) and optimal ordering of independent components to reconstruct time series generated by mixed independent sources. A modified fast neural learning ICA algorithm is used to obtain independent components from observed time series. Different error measures and algorithms for determining the optimal ordering of independent components for reconstruction are compared, including exhaustive search, using the L-infinity norm of components, and excluding the least contributing component first. Experimental results on artificial and currency exchange rate time series support using the Euclidean error measure and minimizing the error profile area to obtain the optimal ordering. Reconstructions with only the first few dominant independent components are often acceptable.
Solar PV parameter estimation using multi-objective optimisationjournalBEEI
The estimation of the electrical model parameters of solar PV, such as light-induced current, diode dark saturation current, thermal voltage, series resistance, and shunt resistance, is indispensable for predicting the actual electrical performance of solar photovoltaic (PV) modules under changing environmental conditions. This paper first reviews the various methods of solar PV parameter estimation to highlight their shortfalls. A new parameter estimation method based on multi-objective optimisation, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), is then proposed. To check its effectiveness and accuracy against conventional methods such as Newton-Raphson and Particle Swarm Optimisation, the proposed method was tested on four solar PV modules of polycrystalline and monocrystalline materials. Finally, the Photowatt PWP201 solar PV module was considered and compared with six state-of-the-art methods. Performance indices such as current absolute error metrics, absolute relative power error, mean absolute error, and the P-V characteristic curve were compared. The results show the close proximity of the characteristic curve obtained with the proposed NSGA-II method to the curve from the manufacturer's datasheet.
Clustering using kernel entropy principal component analysis and variable ker...IJECEIAES
Clustering, as an unsupervised learning method, is the task of dividing data objects into clusters with common characteristics. In this paper, we introduce an enhanced version of the existing EPCA data transformation method. By incorporating a kernel function into EPCA, the input space can be mapped implicitly into a high-dimensional feature space. Shannon's entropy, estimated via the inertia contributed by every mapped object, is then the key measure for determining the optimal extracted feature space. Our proposed method works well with the clustering algorithm based on fast search of cluster centers via local density computation. Experimental results show that the approach is feasible and efficient.
This document presents a new method for creating proxy models called Simulated Annealing Programming (SAP). SAP uses simulated annealing optimization on a tree structure to find the equation that best predicts outputs with minimum error. The authors apply SAP to create a model for predicting gas compressor torque based on fuel rate and speed. They find the SAP model is highly accurate, roughly independent of internal parameters, and has smaller overestimation/underestimation compared to other methods like polynomials and neural networks.
This document proposes and evaluates a new metaheuristic optimization algorithm called Current Search (CS) and applies it to optimize PID controller parameters for DC motor speed control. The CS is inspired by electric current flow and aims to balance exploration and exploitation. It outperforms genetic algorithm, particle swarm optimization, and adaptive tabu search on benchmark optimization problems, finding better solutions faster. When applied to optimize a PID controller for DC motor speed control, the CS successfully controlled motor speed.
Sca a sine cosine algorithm for solving optimization problemslaxmanLaxman03209
The document proposes a new population-based optimization algorithm called the Sine Cosine Algorithm (SCA) for solving optimization problems. SCA creates multiple random initial solutions and uses sine and cosine functions to fluctuate the solutions outward or toward the best solution, emphasizing exploration and exploitation. The performance of SCA is evaluated on test functions, qualitative metrics, and by optimizing the cross-section of an aircraft wing, showing it can effectively explore, avoid local optima, converge to the global optimum, and solve real problems with constraints.
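The SCA position update described above (sine/cosine fluctuation around the best solution, with a linearly decreasing amplitude) can be sketched as a single iteration step. This follows the update equations of the SCA paper, with the control parameter a = 2 as its usual default.

```python
import math
import random

def sca_step(positions, best, t, T, a=2.0):
    # r1 decreases linearly from a to 0 over the run, shifting the
    # search from exploration toward exploitation
    r1 = a - t * (a / T)
    updated = []
    for x in positions:
        row = []
        for j, xj in enumerate(x):
            r2 = 2.0 * math.pi * random.random()   # angle of oscillation
            r3 = 2.0 * random.random()             # weight on the best solution
            r4 = random.random()                   # branch selector
            if r4 < 0.5:   # sine branch
                row.append(xj + r1 * math.sin(r2) * abs(r3 * best[j] - xj))
            else:          # cosine branch
                row.append(xj + r1 * math.cos(r2) * abs(r3 * best[j] - xj))
        updated.append(row)
    return updated
```

Early in the run (large r1) solutions fluctuate outside the region between themselves and the best solution; late in the run (small r1) they converge toward it.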
A Solution to Optimal Power Flow Problem using Artificial Bee Colony Algorith...IOSR Journals
This document presents an artificial bee colony (ABC) algorithm approach to solve the optimal power flow (OPF) problem incorporating a flexible AC transmission system (FACTS) device, specifically a static synchronous series compensator (SSSC). The ABC algorithm is tested on the IEEE 14-bus test system both with and without the SSSC. Results show that the ABC algorithm gives a better solution when incorporating the SSSC, improving the system performance in terms of lower total cost, lower power losses, and better voltage profile compared to the case without SSSC.
This document proposes a new quantile-based fuzzy time series forecasting model. It begins by discussing how time series models have been used to predict things like enrollments, weather, accidents, and stock prices. It then provides background on quantile regression models and fuzzy time series forecasting. The proposed method develops a time variant quantile-based fuzzy time series forecasting approach based on predicting future data trends. The method converts statistical quantiles to fuzzy quantiles using membership functions and provides a fuzzy metric to calculate future values based on trend forecasts. The model is applied to TAIFEX forecasting and shows better performance than other models in terms of complexity and accuracy.
Forecasting electricity usage in industrial applications with gpu acceleratio...Conference Papers
This document compares various exponential smoothing and ARIMA models to forecast electricity usage in an industrial setting using a short time series dataset. It finds that Holt linear trend and Holt linear damped trend models provide the most accurate forecasts for electricity usage in mill production of hammers and pellets based on having the lowest root mean square error values compared to actual usage. GPU acceleration via the RAPIDS framework is used to improve the training and forecasting speed of the models on the short dataset.
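Holt's linear trend recursion, which the comparison found most accurate, and the RMSE criterion used to rank the models can be sketched as follows. The smoothing constants here are illustrative assumptions, not the fitted values from the study.

```python
import math

def holt_forecast(series, h, alpha=0.5, beta=0.3):
    # Holt's linear trend method: a level and a trend are updated at
    # each step; the h-step-ahead forecast is level + h * trend
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + h * trend

def rmse(actual, forecast):
    # Root mean square error, the accuracy criterion used in the comparison
    return math.sqrt(sum((a - f) ** 2
                         for a, f in zip(actual, forecast)) / len(actual))
```

The damped-trend variant multiplies the trend term by a damping factor before each update, which tames long-horizon forecasts on short series.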
This document discusses comparing the accuracy of artificial neural networks (ANN) and multiple linear regression (MLR) models for predicting shear stress in an Eicher 11.10 truck chassis frame. It first reviews past research comparing ANN and MLR models in other applications. It then provides theoretical background on MLR and ANN models. The document describes developing both MLR and ANN models to predict shear stress based on chassis frame design parameters obtained from finite element analysis. The results of the MLR model are shown, and the document indicates that the ANN model will be compared to the MLR model to determine which provides a more accurate prediction of shear stress for the truck chassis frame.
Prediction of Extreme Wind Speed Using Artificial Neural Network ApproachScientific Review SR
Accurate wind speed prediction for wind farms is necessary because of the intermittent nature of wind in any region. A number of methods, such as persistence, physical, statistical, spatial correlation, artificial intelligence, and hybrid methods, are generally available for wind speed prediction. In this paper, ANN-based methods, namely Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) neural networks, are used. The performance of the networks applied to wind speed prediction is evaluated with model performance indicators: Correlation Coefficient (CC), Model Efficiency (MEF) and Mean Absolute Percentage Error (MAPE). Meteorological parameters such as maximum and minimum temperature, air pressure, solar radiation and altitude are used as inputs to the MLP and RBF networks to predict the extreme wind speed at Delhi. During training, the values of CC, MEF and MAPE between observed and predicted wind speed were 0.992, 95.4% and 4.3% respectively for the MLP; for the RBF network they were 0.992, 95.9% and 3.0%. The model performance analysis indicates that the RBF network is the better suited of the two networks studied for predicting extreme wind speed at Delhi.
Electricity load forecasting by artificial neural network modelIAEME Publication
This document summarizes a research paper on electricity load forecasting using an artificial neural network model with weather data. The paper proposes a short-term load forecasting model that uses both historical electricity load and temperature data as inputs to neural networks. Non-decimated wavelet transforms are used to pre-process the data and improve forecasting accuracy. The model was tested on load data from India and achieved a mean absolute percentage error of 1.24%, demonstrating high forecasting accuracy. The paper also reviews other load forecasting methods and their errors, finding neural networks to be the most accurate approach.
A Comparative study on Different ANN Techniques in Wind Speed Forecasting for...IOSRJEEE
There are several available renewable energy sources, among which wind power is the most uncertain in nature, because wind speed changes continuously with time, leading to uncertainty in the amount of wind power generated. Hence, short-term wind speed forecasting helps in prior estimation of wind power availability for the grid and for economic load dispatch. This paper presents a comparative study of a wind speed forecasting model using Artificial Neural Networks (ANN) with three different learning algorithms. ANN is used because it is a non-linear, data-driven, adaptive and very powerful tool for forecasting. An attempt is made to forecast wind speed using ANN with the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Bayesian Regularization (BR) algorithms, and their results are compared on convergence speed during the training period and on testing performance measured by Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Mean Square Error (MSE). A 48-hour-ahead wind speed is forecast in this work and compared with measured values for all three algorithms, and the best of the three is selected based on minimum error.
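The selection-by-minimum-error step described above can be sketched as follows. The algorithm names and the forecast figures are illustrative placeholders, not the paper's data.

```python
def mae(actual, forecast):
    # Mean Absolute Error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mse(actual, forecast):
    # Mean Square Error
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def best_algorithm(actual, forecasts, metric):
    # Pick the training algorithm whose forecast minimizes the metric
    return min(forecasts, key=lambda name: metric(actual, forecasts[name]))

measured = [5.0, 6.0, 7.0]   # hypothetical measured wind speeds (m/s)
forecasts = {
    "LM":  [5.1, 6.0, 7.2],
    "SCG": [5.5, 6.5, 6.5],
    "BR":  [5.2, 5.8, 7.1],
}
print(best_algorithm(measured, forecasts, mae))
```

The same ranking can be repeated under MAPE and MSE; an algorithm that wins under all three metrics is the natural choice.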
System for Prediction of Non Stationary Time Series based on the Wavelet Radi...IJECEIAES
This document proposes and examines the performance of a hybrid time series forecasting model called the wavelet radial bases function neural network (WRBFNN) model. The WRBFNN model uses wavelet transforms to extract coefficients from time series data as inputs to a radial basis function neural network for forecasting. The performance of the WRBFNN model is compared to a wavelet feed forward neural network (WFFNN) model using four types of non-stationary time series data. Results show the WRBFNN model achieves more accurate forecasts than the WFFNN model for certain types of non-stationary data, and trains significantly faster with fewer computational resources required.
Forecasting Electric Energy Demand using a predictor model based on Liquid St...Waqas Tariq
Electricity demand forecasts are required by companies that need to predict their customers' demand, and by those wishing to trade electricity as a commodity on financial markets. It is hard to choose the right prediction method for a given application without being a prediction expert. Recent works show that Liquid State Machines (LSMs) can be applied to time series prediction. The main advantage of the LSM is that it projects the input data into a high-dimensional dynamical space, so simple learning methods can be used to train the readout. In this paper we present an experimental investigation of time series prediction using Liquid State Machines to model a predictor, in a case study of short-term and long-term electricity demand forecasting. The results of this investigation are promising, considering the error used to stop training the readout, the number of training iterations of the readout, and that no seasonal adjustment or data preprocessing strategy was applied to extract uncorrelated data from the time series.
Multi-task learning using non-linear autoregressive models and recurrent neur...IJECEIAES
Tide level forecasting plays an important role in environmental management and development. Current tide level forecasting methods are usually implemented for solving single task problems, that is, a model built based on the tide level data at an individual location is only used to forecast tide level of the same location but is not used for tide forecasting at another location. This study proposes a new method for tide level prediction at multiple locations simultaneously. The method combines nonlinear autoregressive moving average with exogenous inputs (NARMAX) model and recurrent neural networks (RNNs), and incorporates them into a multi-task learning (MTL) framework. Experiments are designed and performed to compare single task learning (STL) and MTL with and without using non-linear autoregressive models. Three different RNN variants, namely, long short-term memory (LSTM), gated recurrent unit (GRU) and bidirectional LSTM (BiLSTM) are employed together with non-linear autoregressive models. A case study on tide level forecasting at many different geographical locations (5 to 11 locations) is conducted. Experimental results demonstrate that the proposed architectures outperform the classical single-task prediction methods.
Short-term load forecasting with using multiple linear regression IJECEIAES
This document discusses short-term load forecasting using multiple linear regression. It summarizes the research method used, which involves developing a multiple linear regression model to predict electrical load based on variables like temperature, humidity, day of week, and previous load data. The model is trained on historical load and weather data from New York City over 9 years. Testing shows the model can predict load a day ahead with 5.15% mean absolute percentage error. Regression coefficients, t-statistics, and p-values indicate the trained model explains about 90% of the variation in load and the predictors are statistically significant. An example day-ahead hourly load forecast is provided for June 25, 2019.
Short term load forecasting using hybrid neuro wavelet modelIAEME Publication
The document presents a hybrid neuro-wavelet model for short term load forecasting. The model uses a cascaded feedforward backpropagation neural network approach. It first applies discrete wavelet transform to decompose historical electricity load data into wavelet coefficients. These coefficients are then used as input to train multiple neural networks. The neural network outputs are recombined using wavelet reconstruction to produce the final forecast. The model is tested on half-hourly electricity load data from Queensland. It achieves a mean square error of 2.07x10-3, indicating an accurate forecast.
Modeling of off grid hybrid power system under jordanian climateAlexander Decker
This document summarizes a study that models an off-grid hybrid power system for a village in Jordan using Matlab Simulink. The hybrid system combines photovoltaic (PV) panels and wind turbines. The study models the PV system, wind turbines, batteries, and load to simulate the hybrid system's output under different conditions. It is found that Matlab Simulink with a graphical user interface can effectively plan and analyze hybrid power systems for off-grid villages.
Efficiency of recurrent neural networks for seasonal trended time series mode...IJECEIAES
Seasonal time series with trends are the most common data sets used in forecasting. This work focuses on the automatic processing of a
non-pre-processed time series by studying the efficiency of recurrent neural networks (RNN), in particular both long short-term memory (LSTM), and bidirectional long short-term memory (Bi-LSTM) extensions, for modelling seasonal time series with trend. For this purpose, we are interested in the learning stability of the established systems using the mean average percentage error (MAPE) as a measure. Both simulated and real data were examined, and we have found a positive correlation between the signal period and the system input vector length for stable and relatively efficient learning. We also examined the white noise impact on the learning performance.
The document discusses using optimized neural networks for short-term wind speed forecasting. It proposes using parametric recurrent neural networks (PRNNs) with an improved activation function that includes a logarithmic parameter "p" to optimize the network size. The PRNNs are trained to predict wind speed using historical wind farm data. Simulation results show the PRNNs more accurately predict wind speed up to 180 minutes in the future compared to numerical methods using polynomials. The value of the "p" parameter can identify linearly dependent neurons that can be combined to reduce the optimized network size.
The document summarizes research comparing the Particle Swarm Optimization (PSO) and Differential Evolution (DE) algorithms for optimizing power consumption using smart energy meter data. Both algorithms were implemented in MATLAB and tested on 15 days of meter data from a university lab in India. PSO achieved an 11.5% reduction in power consumption while DE achieved a 9.4% reduction. PSO outperformed DE for this application, showing it is an effective technique for optimizing energy use and reducing electricity costs for consumers. Future work could integrate the models with real smart meters and controllers to achieve automated scheduling and greater savings.
Neural wavelet based hybrid model for short-term load forecastingAlexander Decker
This document summarizes a research paper that proposes a neural-wavelet based hybrid model for short-term load forecasting. The paper introduces neural networks and how they can be used for electric load forecasting. It then proposes a model that uses wavelet transforms for preprocessing the original load signal data into different levels, before inputting these into a neural network for short-term load forecasting. The model is tested and results show the neural-wavelet model provides more accurate forecasts than an artificial neural network alone.
Effect of Doping of MLGNR on Propagation Delay of a Driver-Interconnect-Load ...IJERA Editor
Nowadays multi layer GNR (MLGNR) is used in nano scale VLSI interconnection. In this research paper study has been done on two variants of MLGNR, one neutral MLGNR (means with no doping) and second intercalated doped MLGNR, in terms of propagation delay. The test circuit which has been used to check the propagation delay is a Driver-interconnect-load system which is using CMOS, short gate (SG) FinFET, independent gate (IG) FinFET, and low power (LP) FinFET as driver and load circuits at 16 nm.
Review of Neural Network in the entired worldAhmedWasiu
This document provides a review of literature on the use of artificial neural network (ANN) models for predicting energy production from renewable sources such as solar PV, hydraulic, and wind energy. ANN models have proven effective at learning complex relationships from data and predicting values in real-time. The review focuses on how ANN models have been applied to issues like reliability assessment of energy assets and incorporating factors like meteorological conditions that impact energy production. Studies that compare ANN to other techniques or identify opportunities for complementary approaches are also discussed.
Similar to FORECASTING ENERGY CONSUMPTION USING FUZZY TRANSFORM AND LOCAL LINEAR NEURO FUZZY MODELS (20)
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
International Conference on NLP, Artificial Intelligence, Machine Learning an...
FORECASTING ENERGY CONSUMPTION USING FUZZY TRANSFORM AND LOCAL LINEAR NEURO FUZZY MODELS
International Journal on Soft Computing ( IJSC ) Vol.2, No.4, November 2011
DOI : 10.5121/ijsc.2011.2402
FORECASTING ENERGY CONSUMPTION USING
FUZZY TRANSFORM AND LOCAL LINEAR NEURO
FUZZY MODELS
Hossein Iranmanesh 1, Majid Abdollahzade 2 and Arash Miranian 3
1 Department of Industrial Engineering, University of Tehran & Institute for International Energy Studies, Tehran, Iran (hiranmanesh@ut.ac.ir)
2 Department of Mechanical Engineering, K.N.Toosi University of Technology & Institute for International Energy Studies, Tehran, Iran (m.abdollahzade@gmail.com)
3 School of Electrical and Computer Engineering, University of Tehran & Institute for International Energy Studies, Tehran, Iran (ar.miranian@gmail.com)
ABSTRACT
This paper proposes a hybrid approach based on local linear neuro fuzzy (LLNF) model and fuzzy
transform (F-transform), termed FT-LLNF, for prediction of energy consumption. LLNF models are
powerful in modelling and forecasting highly nonlinear and complex time series. Starting from an optimal
linear least square model, they add nonlinear neurons to the initial model as long as the model's accuracy
is improved. Trained by the local linear model tree (LOLIMOT) learning algorithm, the LLNF models provide maximum generalizability as well as outstanding performance. In addition, the recently introduced technique of fuzzy transform (F-transform) is employed as a time series pre-processing method. The
technique of F-transform, established based on the concept of fuzzy partitions, eliminates noisy variations
of the original time series and results in a well-behaved series which can be predicted with higher
accuracy. The proposed hybrid method of FT-LLNF is applied to prediction of energy consumption in the
United States and Canada. The prediction results and a comparison with optimized multi-layer perceptron (MLP) models and the LLNF model itself reveal the promising performance of the proposed approach for energy consumption prediction and its potential for real-world applications.
KEYWORDS
LLNF, LOLIMOT, F-transform, energy consumption, forecasting
1. INTRODUCTION
The increasing trend in energy consumption worldwide necessitates appropriate investments in the energy sector by governments. For instance, petroleum consumption in the United States, one of the world's largest energy consumers, increased by 20.4% from 1990 to 2005. U.S. natural gas consumption also experienced a 16.32% increase within the same period [1]. Hence,
providing accurate forecasts of energy consumptions is of crucial importance for policy-makers in
large-scale decision makings, such as investment planning for generation and distribution of
energy.
Different methods, including time series and computational intelligence (CI)-based approaches, have been proposed for energy consumption prediction to date. The auto-regressive integrated
moving average (ARIMA) model, as one of the most popular time series methods, has been used
for energy consumption prediction to a great extent. For instance, Haris and Liu proposed
ARIMA and transfer function models for prediction of electricity consumption [2]. ARIMA and
seasonal ARIMA (SARIMA) were used by Ediger and Akar to estimate the future primary energy
consumption of Turkey from 2005 to 2020 [3]. Linear regression models have also been proposed
for energy consumption prediction [4]. Other time series approaches such as Grey models and
Markov models have been developed for forecasting energy consumption [5]-[7].
A great deal of research has been carried out in the past decade on developing CI-based methods for energy consumption prediction. In contrast to time series approaches, which are inherently linear, CI-based methods can effectively model the nonlinear behaviour of time series. Fuzzy
logic and neural networks (NN) are two main CI-based techniques which have found many
applications in function approximation, simulation, modelling and prediction [8], [9]. Energy
consumption modelling and prediction are also among the applications of the CI-based
approaches. Long-term prediction of gasoline demand in several countries by multi-layer
perceptron (MLP) network has been carried out by Azadeh et al. [10]. In another study, Azadeh et
al. proposed a fuzzy regression algorithm for oil demand estimation of the U.S., Canada, Japan
and Australia [11]. A hybrid technique of well-known Takagi-Sugeno fuzzy inference system and
fuzzy regression has been proposed for prediction of short-term electric demand variations by
Shakouri et al. [12]. In their study, Shakouri et al. introduced a type III TSK fuzzy inference
machine combined with a set of linear and nonlinear fuzzy regressors in the consequent part to
model effects of the climate change on the electricity demand. Yokoyama et al. concentrated on
neural networks to predict energy demands [13]. They used an optimization technique, termed
Modal Trimming Method to optimize model's parameters and then forecasted cooling demand in
buildings. Neuro-fuzzy models, which are combinations of fuzzy logic and neural networks, have also been proposed for energy consumption prediction. For instance, Chen proposed a fuzzy-neural approach for long-term prediction of electric energy in Taiwan [14]. In this study, Chen proposed multiple experts which construct their own fuzzy back-propagation networks from various viewpoints to forecast the long-term electric energy consumption. Chen used fuzzy intersection to aggregate these long-term load forecasts and then constructed a radial basis function network to defuzzify the aggregation result and to generate representative crisp values.
As another example, short-term prediction of natural gas demand has been carried out using
Adaptive neuro-fuzzy inference system (ANFIS) in [15]. Azadeh et al. in [15] used ANFIS
approach with pre-processing and post-processing concepts to improve accuracy of predictions.
Nostrati et al. also proposed a neuro-fuzzy model for long-term electrical load forecasting [16]. Various traditional and CI-based models for energy demand forecasting have also been reviewed
in [17].
In this paper, we employ the technique of fuzzy transform together with the LLNF model for prediction of energy consumption. The recently introduced concept of the fuzzy transform is used for data pre-processing, denoising the consumption series in order to obtain a well-behaved time series. The filtered time series is then used by the LLNF model to forecast future values of consumption. The developed FT-LLNF approach is applied to prediction of annual oil consumption in the U.S. and Canada as well as monthly consumption of natural gas in the U.S. The prediction results demonstrate the effectiveness of the proposed method for energy consumption prediction.
2. LOCAL LINEAR NEURO FUZZY MODEL
The local linear neuro fuzzy approach, based on an incremental tree-based learning algorithm, starts from an initial optimal linear model and increases the complexity of the model as long as improvement occurs. The local linear model tree (LOLIMOT) learning algorithm, employed in this paper, applies axis-orthogonal splits to divide the original input domain into sub-domains, each identified by a local linear model (LLM) and its associated validity function. Thus, the LLNF models use a divide-and-conquer approach, solving a complex modelling problem by decomposing it into smaller and simpler sub-problems [18]. The general structure of the LLNF model for a p-dimensional input space and M local linear models is illustrated in Fig. 1.

Fig. 1. Structure of LLNF
The global output of the LLNF model can be stated as follows:

\hat{y} = \sum_{i=1}^{M} \hat{y}_i \, \varphi_i(u)   (1)

where \hat{y}_i and \varphi_i are the output and the normalized validity function of LLM_i, respectively, and u = [u_1 \; u_2 \; \cdots \; u_p]^T is the input vector.
The local output of each LLM is found from the linear estimation below:

\hat{y}_i = \theta_{i0} + \theta_{i1} u_1 + \cdots + \theta_{ip} u_p, \quad i = 1, \ldots, M   (2)

where \theta_i = [\theta_{i0} \; \theta_{i1} \; \cdots \; \theta_{ip}]^T is the vector of parameters of LLM_i.

The LLNF model offers a transparent representation of the nonlinear system. In other words, the LLNF model divides a complex model into a set of sub-models which perform linearly and independently.
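As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch evaluates the global LLNF output as a validity-weighted sum of local linear model outputs. The function name and array layout are illustrative rather than from the paper, and the normalized validity weights \varphi_i(u) are assumed to be precomputed (their form is given later in this section).

```python
import numpy as np

def llnf_output(u, thetas, validity):
    """Global LLNF output (Eq. 1): validity-weighted sum of local
    linear model outputs.  `u` is the p-dimensional input vector,
    `thetas` is an (M, p+1) array of consequent parameters
    [theta_i0, theta_i1, ..., theta_ip], and `validity` is a length-M
    vector of normalized validity weights phi_i(u) summing to one."""
    u_aug = np.concatenate(([1.0], u))      # prepend 1 for the offset theta_i0
    local_outputs = thetas @ u_aug          # Eq. 2: one linear output per LLM
    return float(validity @ local_outputs)  # Eq. 1: weighted interpolation
```

With a single LLM and validity weight 1, the output reduces to the plain linear model, which matches the description of the LOLIMOT starting point.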
Two types of parameters, i.e. rule consequent and rule premise parameters, must be identified in the LLNF model. The rule consequent parameters (\theta_{ij}), associated with the local linear models, are estimated by a linear least-squares optimization procedure. Both global and local least-squares estimation can be applied. In this paper, the former is employed since it considers the interaction between LLMs to a greater extent and therefore shows better performance [18].

Fig. 2. Operation of LOLIMOT in the first four iterations in a two-dimensional input space
For an LLNF model with M neurons and p inputs, the vector of linear parameters contains n = M(p+1) elements:

\theta = [\theta_{10} \; \theta_{11} \; \cdots \; \theta_{1p} \; \theta_{20} \; \theta_{21} \; \cdots \; \theta_{2p} \; \cdots \; \theta_{M0} \; \theta_{M1} \; \cdots \; \theta_{Mp}]^T   (3)

The corresponding regression matrix for N measured data samples is

X = [X_1 \; X_2 \; \cdots \; X_M]   (4)
where the regression sub-matrix X_i takes the following form:
X_i =
\begin{bmatrix}
\varphi_i(u(1)) & u_1(1)\,\varphi_i(u(1)) & \cdots & u_p(1)\,\varphi_i(u(1)) \\
\varphi_i(u(2)) & u_1(2)\,\varphi_i(u(2)) & \cdots & u_p(2)\,\varphi_i(u(2)) \\
\vdots & \vdots & & \vdots \\
\varphi_i(u(N)) & u_1(N)\,\varphi_i(u(N)) & \cdots & u_p(N)\,\varphi_i(u(N))
\end{bmatrix}   (5)
Hence,

\hat{y} = X \hat{\theta}, \qquad \hat{\theta} = (X^T X)^{-1} X^T y   (6)

where y = [y(1) \; y(2) \; \cdots \; y(N)]^T contains the measured outputs.
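The global least-squares step of Eqs. (4)-(6) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the regression matrix X is assembled from validity-weighted augmented inputs as in Eq. (5), and `np.linalg.lstsq` is used in place of the explicit matrix inverse of Eq. (6) for numerical stability (the two are mathematically equivalent when X^T X is well conditioned).

```python
import numpy as np

def fit_consequents(U, y, validity):
    """Global least-squares estimate of the consequent parameters (Eq. 6).
    U: (N, p) input samples; y: (N,) measured outputs; validity: (N, M)
    matrix with phi_i(u(j)) in column i.  Builds X = [X_1 ... X_M] of
    Eqs. (4)-(5) and solves for theta."""
    N, p = U.shape
    M = validity.shape[1]
    U_aug = np.hstack([np.ones((N, 1)), U])        # rows [1, u1, ..., up]
    # Each sub-matrix X_i weights the augmented inputs by phi_i (Eq. 5)
    X = np.hstack([validity[:, [i]] * U_aug for i in range(M)])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares solve
    return theta.reshape(M, p + 1), X @ theta      # parameters, fitted outputs
```

With a single LLM whose validity is 1 everywhere, this reduces to an ordinary linear regression, matching the "initial optimal linear model" from which LOLIMOT starts.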
The LOLIMOT algorithm is used for estimating the validity function parameters. This algorithm converges quickly and is computationally efficient, and is therefore preferred over other optimization methods such as genetic algorithms and simulated annealing. The LOLIMOT algorithm utilizes multivariate normalized axis-orthogonal Gaussian membership functions, as stated below:
\mu_i(u) = \exp\left(-\frac{1}{2}\left(\frac{(u_1 - c_{i1})^2}{\sigma_{i1}^2} + \cdots + \frac{(u_p - c_{ip})^2}{\sigma_{ip}^2}\right)\right) = \exp\left(-\frac{(u_1 - c_{i1})^2}{2\sigma_{i1}^2}\right) \cdots \exp\left(-\frac{(u_p - c_{ip})^2}{2\sigma_{ip}^2}\right)   (7)

\varphi_i(u) = \frac{\mu_i(u)}{\sum_{j=1}^{M} \mu_j(u)}   (8)
where c_{ij} and \sigma_{ij} represent the centre coordinates and standard deviations of the normalized Gaussian validity function associated with each local linear model.
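Eqs. (7)-(8) translate directly into a few lines of numpy. The sketch below is illustrative (function and argument names are not from the paper): each \mu_i is a product of axis-orthogonal Gaussians, and normalization yields weights that sum to one.

```python
import numpy as np

def validity_functions(u, centres, sigmas):
    """Normalized Gaussian validity functions (Eqs. 7-8).
    centres, sigmas: (M, p) arrays of c_ij and sigma_ij; u: (p,) input.
    Returns the length-M vector phi(u), which sums to one."""
    # Eq. 7: product over dimensions = sum of squared deviations in the exponent
    mu = np.exp(-0.5 * np.sum(((u - centres) / sigmas) ** 2, axis=1))
    return mu / mu.sum()  # Eq. 8: normalization across the M local models
```

For an input equidistant from two symmetric centres, the weights come out equal, as expected from the normalization.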
In the LOLIMOT algorithm, the input space is divided into hyper-rectangles by axis-orthogonal cuts based on a tree structure. Each hyper-rectangle represents an LLM. In the original LOLIMOT algorithm, the LLM with the worst performance is divided into two halves at each iteration. Gaussian membership functions are then placed at the centres of the hyper-rectangles, and the standard deviations are selected proportional to the extension of the hyper-rectangles (usually 1/3 of the hyper-rectangle's extension). Starting from an initial optimal linear model, the LOLIMOT algorithm adds nonlinear neurons provided that the model's performance is enhanced. Therefore, a model with the highest generalization is achieved.

A graphical representation of the partitioning of a two-dimensional input space by LOLIMOT over the first four iterations is given in Fig. 2.
3. FUZZY TRANSFORM
The technique of fuzzy transform (F-transform) has been developed in recent years for function approximation applications [19]. Image compression, denoising and time series analysis are other applications of the F-transform [20]-[21]. In this paper, we employ the F-transform, due to its denoising properties, for pre-processing of the input data in prediction of energy consumption time series. In this section, the concept of a fuzzy partition of [a, b] is defined and then the direct and inverse F-transforms are introduced. Note that here we focus only on the discrete F-transform; the term discrete is omitted through the rest of the paper for convenience.
Definition 1 [19]: Let x_1 < \cdots < x_n be fixed nodes within [a, b], such that x_1 = a, x_n = b and n \geq 2. We say that fuzzy sets A_1, \ldots, A_n, identified with their membership functions A_1(x), \ldots, A_n(x) defined on [a, b], form a fuzzy partition of [a, b] if they fulfill the following conditions for k = 1, \ldots, n:

(1) A_k : [a, b] \to [0, 1], A_k(x_k) = 1;
(2) A_k(x) = 0 if x \notin (x_{k-1}, x_{k+1}), where for uniformity of notation we put x_0 = a and x_{n+1} = b;
(3) A_k(x) is continuous;
(4) A_k(x), k = 2, \ldots, n, strictly increases on [x_{k-1}, x_k] and A_k(x), k = 1, \ldots, n-1, strictly decreases on [x_k, x_{k+1}];
(5) for all x \in [a, b], \sum_{k=1}^{n} A_k(x) = 1.
The membership functions A_1, \ldots, A_n are called basic functions. It must be noted that the basic functions are determined by the set of nodes x_1 < \cdots < x_n and properties 1-5, and their shape is not pre-specified. Therefore, the shape of the basic functions can be determined according to additional requirements. Fig. 3 shows an example of a fuzzy partition by triangular membership functions on [1, 4]. The following formulae give the formal representation of such triangular membership functions on [x_1, x_n]:
A_1(x) =
\begin{cases}
1 - \dfrac{x - x_1}{h_1}, & x \in [x_1, x_2] \\
0, & \text{otherwise}
\end{cases}

A_k(x) =
\begin{cases}
\dfrac{x - x_{k-1}}{h_{k-1}}, & x \in [x_{k-1}, x_k] \\
1 - \dfrac{x - x_k}{h_k}, & x \in [x_k, x_{k+1}] \\
0, & \text{otherwise}
\end{cases}
\qquad k = 2, \ldots, n-1

A_n(x) =
\begin{cases}
\dfrac{x - x_{n-1}}{h_{n-1}}, & x \in [x_{n-1}, x_n] \\
0, & \text{otherwise}
\end{cases}

where h_k = x_{k+1} - x_k, k = 1, \ldots, n-1.
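These triangular basic functions are piecewise linear, so they can be written compactly with `np.interp` (linear interpolation over breakpoints). The sketch below is illustrative and uses 0-based indexing; by construction the memberships sum to one at every point of [x_1, x_n], as required by condition (5) of Definition 1.

```python
import numpy as np

def triangular_basic(nodes, k, x):
    """Membership A_k(x) of the k-th (0-based) triangular basic function
    over fixed nodes x_1 < ... < x_n; valid for x inside [x_1, x_n]."""
    n = len(nodes)
    if k == 0:                        # A_1: decreasing on [x_1, x_2]
        return np.interp(x, [nodes[0], nodes[1]], [1.0, 0.0])
    if k == n - 1:                    # A_n: increasing on [x_{n-1}, x_n]
        return np.interp(x, [nodes[-2], nodes[-1]], [0.0, 1.0])
    # interior A_k: triangle peaking at x_k, zero outside (x_{k-1}, x_{k+1})
    return np.interp(x, [nodes[k - 1], nodes[k], nodes[k + 1]], [0.0, 1.0, 0.0])
```

For example, on the partition with nodes [1, 2, 3, 4] of Fig. 3, any point activates at most two adjacent basic functions, and their memberships always add to 1.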
Fig. 3. A fuzzy partition with triangular membership functions
Fig. 4. Proposed forecast framework. Prediction of time series at period t, using a window of
length W.
3.1 Discrete F-transform
The mathematical description of discrete F-transform and its inverse is presented in this section.
The F-transform transforms an original function into an n-dimensional vector. The inverse F-
transform constructs an approximation of the original function using the transformed n-
dimensional vector. In fact, the F-transform presents a unique and simple representation of the
original function which can be employed in complex computations [20]. The following definition
introduces the F-transform.
Definition 2 [19]: Let A_1, \ldots, A_n be basic functions which form a fuzzy partition of [a, b] and let f be a function given at nodes p_1, \ldots, p_l \in [a, b]. We say that the n-tuple of real numbers [F_1, \ldots, F_n] is the F-transform of the function f, given by

F_k = \frac{\sum_{j=1}^{l} f(p_j) A_k(p_j)}{\sum_{j=1}^{l} A_k(p_j)}, \quad k = 1, \ldots, n   (9)
It has been shown in [19] that the components of the F-transform are the weighted mean values of the original function, where the weights are given by the basic functions.
Definition 3 [19]: Let A_1, \ldots, A_n be basic functions which form a fuzzy partition of [a, b] and let f be a function given at nodes p_1, \ldots, p_l \in [a, b]. Let F_n(f) = [F_1, \ldots, F_n] be the F-transform of f with respect to A_1, \ldots, A_n. Then the function

f_{F,n}(p_j) = \sum_{k=1}^{n} F_k A_k(p_j), \quad j = 1, \ldots, l   (10)

is called the inverse F-transform of the function f.
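Since both Eq. (9) and Eq. (10) are linear in the sampled values, the discrete F-transform and its inverse reduce to a couple of matrix products. The sketch below is a minimal illustration (names are not from the paper), assuming the memberships A_k(p_j) have already been evaluated, e.g. with triangular basic functions.

```python
import numpy as np

def f_transform(f_vals, memberships):
    """Direct discrete F-transform (Eq. 9).
    f_vals: (l,) samples f(p_1..p_l); memberships: (n, l) matrix with
    A_k(p_j) in row k.  Returns [F_1..F_n], the membership-weighted means."""
    memberships = np.asarray(memberships, dtype=float)
    return (memberships @ f_vals) / memberships.sum(axis=1)

def inverse_f_transform(F, memberships):
    """Inverse discrete F-transform (Eq. 10): the smooth reconstruction
    f_{F,n}(p_j) = sum_k F_k A_k(p_j)."""
    return np.asarray(F) @ np.asarray(memberships, dtype=float)
```

A quick sanity check: because the basic functions form a partition of unity, a constant function is reproduced exactly by the transform-then-inverse round trip.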
4. PROPOSED FORECAST FRAMEWORK
In this section, the procedure for denoising a time series by means of the F-transform is described. In the proposed approach, the energy consumption series is processed by the F-transform before being fed into the hybrid forecaster. This procedure removes the high-frequency components of the time series and produces a smooth version of the series, as depicted in Fig. 4. According to this figure, for prediction of the time series at instant t, a window of the time series of length W prior to point t is filtered using the F-transform. This yields a smoothed series and a residual series. The residual series contains the high-frequency variations of the original time series (and is considered noise in this paper). The smoothed series, which contains the main information of the original time series, is used as the forecaster input. It is worth noting that a separate model could be developed for prediction of the residual series, with the final prediction then produced by summing the predicted values of the residual and smoothed series. This will be considered in future work.
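The pre-processing step of this framework can be sketched end to end: build a uniform fuzzy partition over one window, apply the direct F-transform, and reconstruct the smoothed window with the inverse transform; the residual is the part treated as noise. This is an illustrative sketch only, with a uniform triangular partition assumed; the LLNF forecaster that consumes the smoothed series is out of scope here.

```python
import numpy as np

def smooth_window(window, n_basic):
    """Denoise one window of a series (the pre-processing step of Fig. 4):
    direct F-transform over a uniform fuzzy partition with `n_basic`
    triangular basic functions, followed by the inverse F-transform.
    Returns (smoothed_window, residual)."""
    W = len(window)
    p = np.arange(W, dtype=float)
    nodes = np.linspace(0.0, W - 1.0, n_basic)
    # memberships A_k(p_j) of a uniform triangular fuzzy partition
    A = np.empty((n_basic, W))
    A[0] = np.interp(p, nodes[:2], [1.0, 0.0])
    A[-1] = np.interp(p, nodes[-2:], [0.0, 1.0])
    for k in range(1, n_basic - 1):
        A[k] = np.interp(p, nodes[k - 1:k + 2], [0.0, 1.0, 0.0])
    F = (A @ window) / A.sum(axis=1)  # direct F-transform (Eq. 9)
    smoothed = F @ A                  # inverse F-transform (Eq. 10)
    return smoothed, window - smoothed
```

Fewer basic functions give stronger smoothing; with n_basic close to W the window passes through nearly unchanged, which matches the paper's choice of tuning the partition size (15 basic functions in the case study).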
5. ENERGY CONSUMPTION PREDICTION
In this section, the annual oil consumption of the United States and Canada and the monthly gas consumption of the U.S. are forecasted using the proposed FT-LLNF model. These data are collected from the World Bank Development Indicators dataset and the U.S. Energy Information Administration website [22], [23]. To evaluate the performance of the FT-LLNF model, the mean absolute percentage error (MAPE) and absolute percentage error (APE) are computed:
MAPE = \frac{100}{T} \sum_{t=1}^{T} \frac{|y_t - \hat{y}_t|}{y_t}   (15)

APE_t = \frac{|y_t - \hat{y}_t|}{y_t} \times 100   (16)

where y_t and \hat{y}_t are the actual and predicted consumptions at period t, respectively.
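Both error measures compute directly from the actual and predicted series; a minimal sketch (names illustrative) is:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error over T periods (Eq. 15)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(actual - predicted) / actual)

def ape(actual_t, predicted_t):
    """Absolute percentage error at a single period t (Eq. 16)."""
    return 100.0 * abs(actual_t - predicted_t) / actual_t
```

Note that both measures assume strictly positive actual values, which holds for consumption data.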
Fig. 5. Actual and predicted U.S. oil consumption. Blue: Actual, red dots: predictions, thick
purple: error
Table 1. Comparison of MAPE between different methods for the case of U.S. oil consumption

Method    Training   Test
MLP       1.65%      1.91%
LLNF      0.34%      0.54%
FT-LLNF   0.23%      0.35%
5.1. Prediction of Annual Oil Consumption in the U.S. and Canada
Table 2. Actual and predicted values for U.S. oil consumption
Year Actual Forecast APE
2001 19648.71 19542.98 0.54%
2002 19761.3 19685.53 0.38%
2003 20033.5 20060.39 0.13%
2004 20731.15 20668.15 0.30%
2005 20802.16 20724.01 0.37%
Our first case study focuses on prediction of annual oil consumption in the U.S. and Canada, two of the largest consumers of crude oil in the world. For this purpose, crude oil consumption data from 1990 to 2005 are considered. The data points from 1990 to 2000 are selected for training the FT-LLNF model and the remaining 5 data samples are used for testing model performance. The input variables used in this study to construct the forecast model are annual population, cost of crude oil imports, gross domestic product (GDP), annual oil production and the annual oil consumption of the previous period; the model's output is the annual oil consumption. Hence, a multivariate time series prediction is performed. It must be noted that only the oil consumption series is filtered using the F-transform, and the test points are not subject to filtering. For filtering the consumption series, 15 basic functions were used to form the fuzzy partition.
The predicted and actual oil consumptions of the U.S., for both training and test points are shown
in fig. 5. Obviously the proposed FT-LLNF model has tracked the consumption series with
remarkable accuracy. Furthermore, a numerical comparison between an optimized MLP network
(constructed and trained by the authors for the purpose of comparison) and the LLNF model is
presented in table 1. The advantage of the proposed method over the MLP is clear based on the
numerical comparison in table 1. Furthermore, the effectiveness of the F-transform filter can be
understood from this table. Interestingly, by integration of the F-transform filter into the hybrid
model, the MAPE for test data has been reduced from 0.54% to 0.35% (almost 35%
improvement). For a more detailed analysis on performance of the proposed method, the actual
and forecasted values for test points are tabulated in table 2.
Similar predictions are performed for annual oil consumption in Canada; the results, illustrated
in fig. 6, again demonstrate the outstanding performance of the FT-LLNF model. The model
shows noticeable accuracy for training and test data
as well as low prediction errors. Comparing the training and test MAPE of the FT-LLNF model
with an optimized MLP network and the LLNF model, as presented in table 3, confirms the
superiority of the proposed method and the noteworthy effect of the data pre-processing using
F-transform. Furthermore, the actual and predicted values of the test data and the relative errors
are shown in table 4.
Table 3. Comparison of MAPE between different methods. The case of Canada oil consumption
Method Training Test
MLP 2.2% 3.1%
LLNF 0.88% 1.95%
FT-LLNF 0.63% 1.83%
Table 4. Actual and predicted values for Canada oil consumption
Year Actual Forecast APE
2001 2056.84 2057.56 0.035%
2002 2078.35 2137.67 2.85%
2003 2207.15 2141.40 2.98%
2004 2299.67 2228.76 3.08%
2005 2296.86 2307.78 0.47%
5.2. Prediction of Monthly Natural Gas Consumption in the U.S.
In the previous case studies, predictions of annual energy consumption were carried out and
presented. In this case study we intend to demonstrate model performance on short-term energy
consumption prediction, considering the monthly gas consumption in the U.S. For this purpose,
U.S. monthly consumption of natural gas from Jan. 1973 to Dec. 2010 is collected from [22]. In
contrast to the previous case studies, a univariate time series prediction is carried out here; i.e.,
the inputs of the hybrid forecaster are only the previous values of natural gas consumption. The
data from Jan. 1973 to Dec. 2007 are selected as the training set; hence, the natural gas
consumption from Jan. 2008 to Dec. 2010 (i.e. three years) will be forecasted as test data.
Fig. 6. Actual and predicted Canada oil consumption. Blue: Actual, red dots: predictions,thick
purple: error
Fig. 7. Autocorrelation factor for U.S. monthly natural gas time series
In order to select proper input features, an autocorrelation analysis is first performed. The time
lags with the largest autocorrelation factors (ACF) will be selected as input features of the
forecaster. The result of the autocorrelation analysis on the training series is depicted in fig. 7.
Interesting findings can be extracted from fig. 7. First, time lags 1 and 2 are highly informative,
as they capture the trend component of the consumption series. Besides, time lags 12, 13, 24 and
25 are other important features, which represent the cyclic components of the natural gas
consumption series.
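The lag-selection step can be reproduced with a plain sample-ACF computation. The snippet below is a sketch on a synthetic trend-plus-annual-cycle series (the real gas data are not reproduced here); the function name is ours.

```python
import numpy as np

def acf(series, max_lag):
    """Sample autocorrelation factors for lags 1..max_lag."""
    x = np.asarray(series, float)
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# Synthetic monthly series: linear trend plus a 12-month cycle, mimicking
# the structure of the U.S. natural gas consumption data.
t = np.arange(420)
y = 50.0 + 0.05 * t + 10.0 * np.sin(2.0 * np.pi * t / 12.0)

r = acf(y, max_lag=30)
top_lags = 1 + np.argsort(r)[::-1][:6]  # the six lags with the largest ACF
```

On such a series the trend keeps lags 1 and 2 informative while the annual cycle makes lags around 12 and 24 stand out, consistent with the pattern in fig. 7.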
Fig. 8. Actual and predicted U.S. monthly natural gas consumption. Blue: Actual, dashed red:
prediction
Table 5. Comparison of MAPE between different methods. The case of U.S. natural gas
consumption
Method Training Test
MLP 5.5% 6.7%
LLNF 3.5% 4.1%
FT-LLNF 2.8% 3.8%
Hence, the consumption values at 1, 2, 12, 13, 24 and 25 months ago are employed for
prediction of the gas consumption at the current month. The model is then constructed using the
selected 6 input features.
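Building the input matrix from the selected lags is a simple slicing exercise; the sketch below uses our own helper name and a toy series in place of the gas consumption data.

```python
import numpy as np

LAGS = [1, 2, 12, 13, 24, 25]  # lags selected from the autocorrelation analysis

def make_lag_features(series, lags=LAGS):
    """Return (X, y): row i of X holds the series values at the chosen lags
    relative to the target y[i], the consumption of the current month."""
    s = np.asarray(series, float)
    start = max(lags)  # first index with all lagged values available
    X = np.column_stack([s[start - k: len(s) - k] for k in lags])
    return X, s[start:]

series = np.arange(100.0)  # toy stand-in for the monthly consumption series
X, y = make_lag_features(series)
# For y[0] == series[25], the lag-1 feature is series[24] and the
# lag-25 feature is series[0].
```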
The actual and predicted consumptions for training and test data are shown in fig. 8. A
comparison with an MLP network and the LLNF model is also presented in table 5. Fig. 8 and
table 5 reveal the noteworthy modelling and prediction capabilities of the proposed FT-LLNF
method and its superior performance.
5.3. Performance comparison
This sub-section provides an overview of the results of the three case studies. Fig. 9 shows the
MAPE of the MLP, LLNF and FT-LLNF for each case study. Clearly, the proposed approach
exhibits better performance in all forecasts. Superiority of the FT-LLNF approach over the LLNF
model demonstrates the favourable filtering effect of the F-transform on forecasting accuracy.
Another comparative study is carried out using the MAPE ratio, obtained by dividing the test
MAPE of the proposed approach by the test MAPE of the compared methods (i.e. MLP and
LLNF). The values of the MAPE ratio, computed for all case studies, are summarized in table 6.
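The MAPE ratio is simply the test MAPE of FT-LLNF divided by that of the baseline; a ratio below 1 favours the proposed model. The sketch below recomputes the ratios from the test MAPEs reported in tables 1, 3 and 5.

```python
# Test MAPEs (in percent) from tables 1, 3 and 5.
test_mape = {
    "US oil":     {"MLP": 1.91, "LLNF": 0.54, "FT-LLNF": 0.35},
    "Canada oil": {"MLP": 3.10, "LLNF": 1.95, "FT-LLNF": 1.83},
    "US gas":     {"MLP": 6.70, "LLNF": 4.10, "FT-LLNF": 3.80},
}

# Ratio < 1 means FT-LLNF outperforms the compared method on that case study.
mape_ratio = {
    case: {m: v["FT-LLNF"] / v[m] for m in ("MLP", "LLNF")}
    for case, v in test_mape.items()
}
```

For instance, on U.S. oil consumption the ratio against the MLP is 0.35/1.91 ≈ 0.18, i.e. roughly a fivefold reduction in test error.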
Fig. 9. Performance comparison of different models in three case studies
Table 6. Test MAPE ratio for all case studies
There are large differences between the MLP and the proposed method across the case studies,
showing that the MLP model has not accurately modelled the nonlinearities of the energy
consumption series, compared to the two other approaches. LLNF and FT-LLNF models showed
comparable performance in case studies two and three. However, the difference in their
performance is noticeable for the first case study. It is concluded that proper adjustment of the
F-transform filter (e.g. the shape and number of the basic functions) can considerably affect the
accuracy of the forecast model.
6. CONCLUSION
This paper proposed a forecast method based on local linear neuro-fuzzy models and the fuzzy
transform for energy consumption prediction. The combination of the LLNF model, with its
distinguished modelling and forecasting abilities, and the recently introduced F-transform, with
its notable noise-removal properties, resulted in an effective forecaster for energy consumption
prediction. The proposed FT-LLNF model was evaluated through different case studies:
predictions of annual and monthly energy consumption in the U.S. and Canada demonstrated its
accurate performance, and both multivariate and univariate predictions were considered.
Comparison to an optimized MLP network and the LLNF model revealed the superior
performance of the proposed model. Through comparisons with the LLNF model, the favourable
data-processing properties of the F-transform were recognised, and a considerable improvement
in prediction accuracy through the integration of the F-transform into the forecast model was
identified.
REFERENCES
[1] International Energy Agency (IEA) [online] Available: www.iea.org
[2] Harris, J.L. and Liu, L.M., 2002. Dynamic structural analysis and forecasting of residential electricity
consumption. International Journal of Forecasting, 9(4), 437-455.
[3] Ediger, V.S. and Akar, S., 2007. ARIMA forecasting of primary energy demand by fuel in Turkey.
Energy Policy, 35(3), 1701-1708.
[4] Bianco, V., Manca, O. and Nardini, S., 2009. Electricity consumption forecasting in Italy using
linear regression models. Energy, 34(9), 1413-1421.
[5] Kumar, U., Jain, V.K., 2010. Time series models (Grey-Markov, Grey Model with rolling mechanism
and singular spectrum analysis) to forecast energy consumption in India. Energy, 35(4), 1709-1716.
[6] Lee, Y.S. and Tong, L.I., 2011. Forecasting energy consumption using a grey model improved by
incorporating genetic programming. Energy Conversion and Management, 52(1), 147-152.
[7] Zhou, P., Ang, B.W. and Poh, K.L., 2006. A trigonometric grey prediction approach to forecasting
electricity demand. Energy, 31(14), 2839-2847.
[8] Kulkarni, V.R., Mujawar, S. and Apte, S., 2010. Hash Function Implementation Using Artificial
Neural Network. IJSC, 1(1), 1-8.
[9] Obi, J.C. and Imainvan, A.A., 2011. Decision Support System for the Intelligent Identification of
Alzheimer using Neuro Fuzzy logic. IJICIC, 2(2), 25-38.
[10] Azadeh, A., Arab, R. and Behfard, S., 2011. An adaptive intelligent algorithm for forecasting long
term gasoline demand estimation: The cases of USA, Canada, Japan, Kuwait and Iran. Expert
Systems with Applications, 37(12), 7427-7437.
[11] Azadeh, A., Khakestani, M. and Saberi, M., 2009. A flexible fuzzy regression algorithm for
forecasting oil consumption estimation. Energy Policy, 37(12), 5567-5579.
[12] Shakouri, H., Nadimi, R. and Ghaderi, F., 2008. A hybrid TSK-FR model to study short-term
variations of the electricity demand versus the temperature changes. Expert Systems with
Applications 36(2), Part 1, 1765-1772.
[13] Yokoyama, R., Wakui, T. and Satake, R., 2009. Prediction of energy demands using neural network
with model identification by global optimization. Energy Conversion and Management, 50(2), 319-
327.
[14] Chen, T., 2011.A collaborative fuzzy-neural approach for long-term load forecasting in Taiwan.
Computers & Industrial Engineering, doi:10.1016/j.cie.2011.06.003.
[15] Azadeh, A., Asadzadeh, S.M. and Ghanbari, A., 2010. An adaptive network-based fuzzy inference
system for short-term natural gas demand estimation: Uncertain and complex environments. Energy
Policy, 38(3), 1529-1536.
[16] Nosrati Maralloo, M., Koushki, A.R., Lucas, C. and Kalhor, A. Long term electrical load forecasting
via a neuro-fuzzy model. 14th International CSI Computer Conference, CSICC 2009, art. no.
5349440, 35-40.
[17] Suganthi, L. and Samuel, A., 2011. Energy models for demand forecasting: A review. Renewable
and Sustainable Energy Reviews.
[18] Nelles, O., 2001. Nonlinear System Identification, Springer, Berlin, Germany.
[19] Perfilieva, I., 2006. Fuzzy transforms: theory and applications. Fuzzy Sets and Systems, 157(8), 993-
1023.
[20] Di Martino, F., Loia, V. and Sessa, S., 2010. A segmentation method for images compressed by fuzzy
transforms. Fuzzy Sets and Systems, 161(1), 56-74.
[21] Di Martino, F., Loia, V. and Sessa, S., 2010. Fuzzy transforms for compression and decompression of
color videos. Information Sciences, 180(20), 3914-3931.
[22] U.S. Energy Information Administration [online] available: http://www.eia.gov/.
[23] World Bank, 2006. World Development Indicators 2006. World Bank, Washington, DC.