Inflation is a benchmark of a country's economic development. Because inflation influences many aspects of the economy, forecasting upcoming inflation has a positive impact. Among the various forecasting methods, fuzzy time series forecasting produces strong results, and the fuzzy logical relationships (FLR) model is particularly good at forecasting. However, some of its parameters need to be optimised, and the interval is the parameter with the greatest influence on the forecasting result. Previous work optimised the intervals with a hybrid of automatic clustering and particle swarm optimization (ACPSO): automatic clustering forms an appropriate number of intervals, while PSO optimises the value of each interval to maximise accuracy. This study proposes an improvement that searches for the solution using an auto-speed acceleration algorithm, which can reach global solutions that are hard for PSO to find, with faster computation time. The resulting solutions provide the right intervals, so the FLR model can deliver maximally accurate forecasts.
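To make the role of the intervals concrete, the sketch below implements a minimal first-order fuzzy time series forecaster in the style of Chen's classic model: the universe of discourse is split into intervals, each observation is fuzzified to the interval containing it, fuzzy logical relationships are grouped, and the forecast is the mean midpoint of the successor intervals. This is a generic illustration, not the paper's ACPSO or auto-speed acceleration method; the equal-width partition is exactly the naive choice those optimizers replace with tuned interval boundaries.

```python
import numpy as np

def fts_forecast(series, n_intervals=7):
    """Minimal first-order fuzzy time series (Chen-style) forecaster."""
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)   # equal-width partition
    mids = (edges[:-1] + edges[1:]) / 2

    def fuzzify(x):
        # Index of the interval containing x (clipped to the last interval).
        return min(np.searchsorted(edges, x, side="right") - 1, n_intervals - 1)

    states = [fuzzify(x) for x in series]

    # Group fuzzy logical relationships A_i -> {A_j, ...}.
    flrg = {}
    for cur, nxt in zip(states, states[1:]):
        flrg.setdefault(cur, set()).add(nxt)

    last = states[-1]
    successors = flrg.get(last, {last})
    # Forecast: mean midpoint of all successor intervals of the last state.
    return np.mean([mids[s] for s in successors])

# Example: forecast the next value of a short inflation-like series.
data = [6.96, 7.02, 7.32, 7.25, 7.15, 6.83, 6.25, 6.25, 4.42, 4.45]
print(round(fts_forecast(data), 3))
```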
Regression Modelling for Precipitation Prediction Using Genetic Algorithms (TELKOMNIKA JOURNAL)
This paper discusses the formation of an appropriate regression model for precipitation prediction. Precipitation prediction strongly influences efforts to increase agricultural potato production in Tengger, East Java, Indonesia. Over time, precipitation follows non-linear patterns, so a non-linear approach produces more accurate predictions. A genetic algorithm (GA) is used to choose the precipitation periods that form the best model, and the best combination of crossover rate and mutation rate is tested to prevent premature convergence. Root Mean Square Error (RMSE) serves as the benchmark for prediction accuracy. Based on the RMSE of each method at every location, prediction using GA-based non-linear regression outperforms Fuzzy Tsukamoto at each location. Compared to Generalized Space-Time Autoregressive-Seemingly Unrelated Regression (GSTAR-SUR), GA-based prediction is also better: at three locations GA is superior, and at the remaining location GA has the lowest deviation.
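As a rough illustration of how a GA can pick which past periods enter a regression model, consider the sketch below. The paper's exact encoding, rates, and fitness are not given here, so the bitstring encoding, parameter values, and helper names are assumptions.

```python
import random
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def fitness(bits, X, y):
    """Fit least squares on the selected lag columns; lower RMSE is fitter."""
    cols = [i for i, b in enumerate(bits) if b]
    if not cols:
        return float("inf")
    A = np.column_stack([X[:, cols], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return rmse(y, A @ coef)

def ga_select_lags(X, y, pop=30, gens=50, cx_rate=0.8, mut_rate=0.05):
    """Evolve a bitstring marking which lag periods the regression uses."""
    n = X.shape[1]
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda b: fitness(b, X, y))
        parents = scored[: pop // 2]                 # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            if random.random() < cx_rate:            # one-point crossover
                cut = random.randrange(1, n)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # Bit-flip mutation.
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        population = parents + children
    return min(population, key=lambda b: fitness(b, X, y))
```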
Resource levelling minimizes resource fluctuations by postponing the earliest start time (EST) of non-critical activities within their corresponding floats. Consuming float for resource levelling may reduce the probability of completing the project on time. This paper presents a method to minimize resource fluctuations with minimum impact from float consumption. A case study is presented to verify the validity and usability of the method.
Big 5 ASEAN capital markets forecasting using WEMA method (TELKOMNIKA JOURNAL)
Through the ASEAN Economics Community (AEC) 2020 treaty, ASEAN has proposed financial integration via capital market integration in order to achieve comprehensive ASEAN economic integration. A proper prediction of the ASEAN capital markets therefore becomes a major issue. In this study, we forecast the big 5 ASEAN capital markets, i.e. the Straits Times Index (STI), Kuala Lumpur Stock Exchange (KLSE), Stock Exchange of Thailand (SET), Jakarta Stock Exchange (JKSE), and Philippine Stock Exchange (PSE), using the WEMA method. Weighted Exponential Moving Average (WEMA) is a new hybrid moving average method which combines the weighting factor calculation of the Weighted Moving Average (WMA) with the procedure of the Exponential Moving Average (EMA). WEMA has been successfully implemented to forecast discrete time series data, but it has never been used to forecast ASEAN capital markets. In this study, we take a further step by implementing the WEMA method with a brute force approach to scaling factor tuning on the big 5 ASEAN capital markets. The experimental results show that WEMA successfully forecasts all of these exchanges; by forecast error measurement, it performs best on the PSE dataset and worst on the SET dataset among all datasets considered in this study.
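A minimal sketch of the idea follows, assuming WEMA exponentially smooths successive linearly weighted averages and that the brute-forced scaling factor is the smoothing constant alpha; the paper's exact formulation may differ.

```python
import numpy as np

def wma(window):
    """Linearly weighted average: the newest observation gets the largest weight."""
    w = np.arange(1, len(window) + 1)
    return float(np.dot(window, w) / w.sum())

def wema(series, n=5, alpha=0.4):
    """Assumed WEMA form: exponentially smooth successive WMA values."""
    wmas = [wma(series[i - n:i]) for i in range(n, len(series) + 1)]
    s = wmas[0]
    for v in wmas[1:]:
        s = alpha * v + (1 - alpha) * s
    return s  # one-step-ahead forecast

def tune_alpha(series, n=5, grid=np.linspace(0.05, 0.95, 19)):
    """Brute-force scaling-factor tuning: pick alpha with lowest in-sample RMSE."""
    def rmse_for(a):
        errs = [series[t] - wema(series[:t], n, a) for t in range(n + 1, len(series))]
        return float(np.sqrt(np.mean(np.square(errs))))
    return min(grid, key=rmse_for)
```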
A Novel Forecasting Based on Automatic-optimized Fuzzy Time Series (TELKOMNIKA JOURNAL)
In this paper, we propose a new method for forecasting based on automatic-optimized fuzzy time series to forecast the Indonesia Inflation Rate (IIR). First, we propose a forecasting model of two-factor high-order fuzzy-trend logical relationship groups (THFLGs) for predicting the IIR. Second, we propose interval optimization using automatic clustering and particle swarm optimization (ACPSO) to optimize the intervals of the main factor IIR and the secondary factors SF, where SF = {Consumer Price Index (CPI), Bank of Indonesia (BI) Rate, Indonesian Rupiah/US Dollar (IDR/USD) exchange rate, Money Supply}. The proposed method achieves a lower root mean square error (RMSE) than previous methods.
This document discusses using particle swarm optimization (PSO) to design optimal close-range photogrammetry networks. PSO is introduced as a heuristic optimization algorithm, inspired by bird flocking behavior, that can solve complex optimization problems. The document provides an overview of close-range photogrammetry network design and its four design stages, and explains that PSO is used to optimize the first stage: determining optimal camera station positions. Mathematical models of PSO for close-range photogrammetry network design are developed, and experimental tests are carried out to build a PSO algorithm that determines optimum camera positions and to evaluate the accuracy of the resulting network.
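For reference, the canonical PSO update that such designs build on is sketched below: a generic global-best formulation with inertia weight. The photogrammetry-specific objective is stubbed out as a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective; a real design would score candidate camera-station
    # positions by the expected accuracy of the photogrammetric network.
    return float(np.sum(x ** 2))

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(f(g))

print(pso(sphere))
```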
Prediction of passenger train using fuzzy time series and percentage change m... (journalBEEI)
In the field of railway operation, predicting railway passenger volume has always been a hot topic: accurate forecasts are the foundation on which railway transportation companies optimize transit efficiency and revenue. The goal of this research is to forecast the number of train passengers using a combination of a fuzzy time series approach based on a rate-of-change algorithm and the Holt double exponential smoothing method. In contrast to prior investigations, we focus primarily on determining the next time period. The fuzzy time series is employed as the forecasting basis, the rate of change is used to build the universe of discourse, and Holt's double exponential smoothing is utilized to forecast the following period in this case study. The number of railway passengers predicted for January 2020 is 38199, with a small average forecasting error rate of 0.89 percent and a mean square error of 131325. The approach can also help rail firms anticipate future passenger demand, informing decisions on whether to add train cars or run new trains, and how to distribute tickets.
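Holt's double exponential smoothing, the second component above, maintains a level and a trend; a minimal sketch of the generic textbook form follows, with smoothing constants and data chosen arbitrarily for illustration.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's (double exponential) smoothing: level + trend components."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)         # update level
        trend = beta * (level - prev_level) + (1 - beta) * trend  # update trend
    return level + horizon * trend  # h-step-ahead forecast

passengers = [35100, 36420, 35880, 37010, 37550, 36990, 38120]  # illustrative data
print(round(holt_forecast(passengers)))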
Rule Optimization of Fuzzy Inference System Sugeno using Evolution Strategy f... (IJECEIAES)
The need for accurate load forecasts will increase in the future because of the dramatic changes occurring in electricity consumption. A Sugeno fuzzy inference system (FIS) can be used for short-term load forecasting. However, a key challenge in electrical load forecasting is the trend in the data used, which makes it difficult to develop appropriate fuzzy rules for a Sugeno FIS. This paper proposes an Evolution Strategy method to determine Sugeno FIS rules with minimum forecasting error. Root Mean Square Error (RMSE) is used to evaluate the quality of the forecasting results. Numerical experiments show the effectiveness of the proposed optimized Sugeno FIS on several test-case problems: it produces RMSE lower than or comparable to that achieved by other well-known methods in the literature.
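To show what a Sugeno FIS evaluation looks like, here is a minimal zero-order sketch with two triangular input sets and constant rule consequents; the paper's actual membership functions, and the consequent values an Evolution Strategy would tune, are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_forecast(load, consequents=(180.0, 260.0)):
    """Zero-order Sugeno FIS: weighted average of constant rule outputs.

    Rule 1: IF load is LOW  THEN next_load = consequents[0]
    Rule 2: IF load is HIGH THEN next_load = consequents[1]
    The consequent constants are what an Evolution Strategy would evolve
    to minimize RMSE on historical load data.
    """
    w_low = tri(load, 100, 150, 250)   # firing strength of "low load"
    w_high = tri(load, 150, 250, 300)  # firing strength of "high load"
    total = w_low + w_high
    if total == 0:
        return load  # outside all sets: fall back to persistence
    return (w_low * consequents[0] + w_high * consequents[1]) / total

print(round(sugeno_forecast(200.0), 1))
```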
This document describes a study that develops a fuzzy inference system (FIS) to assess the sustainability of biomass production for energy purposes. The FIS uses four input parameters - energy output, energy balance ratio, fertilizer usage, and pesticide usage - with defined membership functions. Eighty-one IF-THEN rules were created relating the input parameters to a single output parameter, a fuzzy sustainability index (FSI). The FSI indicates the sustainability level as very low, low, medium, high or very high. The FIS provides a means to evaluate biomass sustainability that can handle uncertain input data, unlike other assessment methods. Graphs show the relationship between input parameters and the fuzzy output based on the rules.
Implementing Lee's model to apply fuzzy time series in forecasting bitcoin price (CSITiaesprime)
Over time, cryptocurrencies like Bitcoin have attracted investors' and speculators' interest. Bitcoin's dramatic rise in value in recent years has caught the attention of many who see it as a promising investment asset. Bitcoin investment, however, is inseparable from the price volatility that investors must mitigate. This research aims to use Lee's Fuzzy Time Series approach to forecast the price of Bitcoin. Lee's Fuzzy Time Series is a time series analysis method for working around ambiguity and uncertainty in time series data, first introduced by Ching-Cheng Lee in his research on time series prediction as a development of earlier fuzzy time series (FTS) models, namely those of Song and Chissom and of Cheng and Chen. According to most previous studies, Lee's model conveys more precise forecasting results than the classic FTS models. This study used first and second orders, obtaining error values of 5.419% for the first order and 4.042% for the second order, which means the forecasting results are excellent. However, only the first order can be used to predict the next period's Bitcoin price: in the second order, the relations produced for the next period have no groups in their fuzzy logical relationship group (FLRG), so the next period's price cannot be predicted. This study gives investors and the general public an additional factor to consider when keeping, selling, or purchasing cryptocurrencies.
Brown's Weighted Exponential Moving Average Implementation in Forex Forecasting (TELKOMNIKA JOURNAL)
In 2016, a time series forecasting technique was introduced that combined the weighting factor calculation formula of the weighted moving average with Brown's double exponential smoothing procedure. The technique, known as Brown's weighted exponential moving average (B-WEMA), is a new variant of the double exponential smoothing method that applies the exponential filter twice. In this research, we implement the new method to forecast foreign exchange (forex) data, including EUR/USD, AUD/USD, GBP/USD, USD/JPY, and EUR/JPY. The forecasting results using B-WEMA are then compared with other conventional and hybrid moving average methods, such as the weighted moving average (WMA), exponential moving average (EMA), and Brown's double exponential smoothing (B-DES). The comparison shows that B-WEMA has better accuracy than the other forecasting methods used in this research.
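Brown's double exponential smoothing, which B-WEMA extends, runs the same exponential filter twice and combines the two smoothed series into a level-plus-trend forecast. Below is a compact sketch of the standard Brown procedure; B-WEMA itself would feed WMA-weighted values into the filter, a detail not reproduced here.

```python
def brown_des(series, alpha=0.3, horizon=1):
    """Brown's double exponential smoothing (B-DES)."""
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1    # first exponential filter
        s2 = alpha * s1 + (1 - alpha) * s2   # second filter applied to the first
    a = 2 * s1 - s2                          # estimated level
    b = alpha / (1 - alpha) * (s1 - s2)      # estimated trend
    return a + b * horizon                   # h-step-ahead forecast

rates = [1.0821, 1.0835, 1.0812, 1.0868, 1.0890, 1.0874]  # illustrative EUR/USD
print(round(brown_des(rates), 4))
```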
This PowerPoint presentation was prepared as part of the course STAT 591, titled Master's Seminar, during the third semester of the M.Sc. Agricultural Statistics programme at Agricultural College, Bapatla, under ANGRAU, Andhra Pradesh.
Understanding the effect of Financing Variability Using Chance-Constrained A... (IRJET Journal)
1) The document discusses using chance-constrained programming (CCP) to model the impact of financing variability on time-cost tradeoffs in construction projects. CCP allows incorporating the probability of events into an optimization model (a deterministic reformulation is sketched after this list).
2) Previous studies have used linear programming and other approaches to address time-cost tradeoffs but have not considered uncertainties like financing variability.
3) The study aims to develop a new mathematical model that comprehensively addresses precedence constraints, financing variability, and time-cost optimization for construction projects. CCP is used to quantify the effect of cost uncertainty from variable financing.
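For context, the standard move in chance-constrained programming is to replace a probabilistic constraint with a deterministic equivalent. Assuming, purely for illustration, that available financing \(F\) is normally distributed with mean \(\mu_F\) and standard deviation \(\sigma_F\), a requirement that the schedule's cost be covered with confidence \(p\) becomes

\[
\Pr\big(\mathrm{cost}(x) \le F\big) \ge p
\quad\Longleftrightarrow\quad
\mathrm{cost}(x) \le \mu_F - z_p\,\sigma_F,
\]

where \(z_p = \Phi^{-1}(p)\) is the standard normal quantile; for \(p = 0.95\), \(z_p \approx 1.645\), so the plan must fit within the mean financing less 1.645 standard deviations.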
Fuzzy sequential model for strategic planning of small and medium scale indus... (TELKOMNIKA JOURNAL)
The use of strategic planning can be an alternative solution to improve industrial performance. In small and medium scale industries, especially apple chips industries, strategic planning helps reveal the current industry situation and the steps that must be taken to overcome existing problems. This study aimed to develop improvement strategies using the Fuzzy Sequential Modeling (FSM) model, which consists of a SWOT analysis, Root Cause Analysis (RCA), Bolden's Taxonomy, and fuzzy AHP. Based on the SWOT analysis, the external threat factors were similar business competition and low purchasing power. RCA described the issues that needed to be fixed, using Bolden's Taxonomy as the reference for determining the action plans, and produced four Open Improvement Areas (OIA): outdated machines and equipment, difficulty of enterprise development, ineffective marketing media, and low market share. The strategic plan was determined using fuzzy AHP based on the OIA; the ABC enterprise needs to improve its low market share and ineffective marketing strategy.
IRJET- GDP Forecast for India using Mixed Data Sampling Technique (IRJET Journal)
This document describes a study that aimed to forecast India's GDP using a mixed data sampling (MIDAS) technique. It first conducted a preliminary study to identify macroeconomic indicators that affect India's GDP. It then used dynamic factor modeling to identify the most relevant predictors from the collected data; twenty-two predictors varying in frequency (quarterly, monthly, weekly, daily) were identified. The MIDAS technique was then used to obtain a GDP forecast incorporating predictors of different frequencies, without averaging them to a single frequency. The forecast was compared to one obtained using a traditional regression method, and the accuracy of the two forecasts was assessed by calculating forecast errors and conducting statistical tests. The results suggest the MIDAS technique provided a more accurate forecast.
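The core of a MIDAS regression is a parsimonious weighting polynomial that maps a high-frequency predictor onto the low-frequency target without averaging. A common textbook form, shown here for a single predictor sampled \(m\) times per GDP period with exponential Almon weights (an illustration, not necessarily the study's exact specification), is

\[
y_t = \beta_0 + \beta_1 \sum_{j=0}^{J} w(j;\theta)\, x^{(m)}_{t-j/m} + \varepsilon_t,
\qquad
w(j;\theta) = \frac{\exp\!\left(\theta_1 j + \theta_2 j^2\right)}{\sum_{k=0}^{J} \exp\!\left(\theta_1 k + \theta_2 k^2\right)},
\]

so only the two parameters \(\theta_1, \theta_2\) govern how quickly the influence of higher-frequency observations decays.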
IRJET- Comparative Analysis between Critical Path Method and Monte Carlo S... (IRJET Journal)
This document compares the Critical Path Method (CPM) and Monte Carlo simulation for project scheduling. CPM uses deterministic activity durations to calculate the critical path and project duration. Monte Carlo simulation incorporates uncertainty by using three time estimates per activity - optimistic, pessimistic, and most likely - represented as probability distributions. It runs thousands of simulations to determine the likely project duration based on random sampling from these distributions. The document reviews literature on applying Monte Carlo simulation in construction projects. It then describes a study that uses both CPM and Monte Carlo simulation on a real construction project to compare the results and evaluate Monte Carlo simulation's usefulness for the construction industry.
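A toy version of the Monte Carlo side, assuming triangular distributions built from the three time estimates and a hypothetical four-activity network, looks like this:

```python
import random

# Hypothetical network: activity -> (optimistic, most likely, pessimistic) days.
ACTIVITIES = {"A": (2, 4, 8), "B": (3, 5, 9), "C": (1, 2, 4), "D": (2, 3, 6)}
# Two parallel paths: A -> C and B -> D; the project ends when both finish.
PATHS = [("A", "C"), ("B", "D")]

def simulate_duration():
    # random.triangular(low, high, mode) samples one duration per activity.
    sampled = {k: random.triangular(o, p, m) for k, (o, m, p) in ACTIVITIES.items()}
    return max(sum(sampled[a] for a in path) for path in PATHS)

runs = sorted(simulate_duration() for _ in range(10_000))
print(f"mean duration: {sum(runs) / len(runs):.1f} days")
print(f"80th percentile: {runs[int(0.8 * len(runs))]:.1f} days")
```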
Forecasting stock price movement direction by machine learning algorithm (IJECEIAES)
Forecasting stock price movement direction (SPMD) is an essential issue for short-term investors and a hot topic for researchers. It is a real challenge given the efficient market hypothesis, under which historical data should not help forecasting because it is already reflected in prices. Some commonly used classical methods are based on statistics and econometric models. However, forecasting becomes more complicated when the variables in the model are all nonstationary and the relationships between the variables are sometimes very weak or simultaneous. The continuous development of powerful machine learning and artificial intelligence algorithms has opened a promising new direction. This study compares the predictive ability of three forecasting models: support vector machine (SVM), artificial neural networks (ANN), and logistic regression. The data used are those of stocks in the VN30 basket with a holding period of one day. With the rolling window method, this study obtained a highly predictive SVM with an average accuracy of 92.48%.
Comparison of exponential smoothing and neural network method to forecast ric... (TELKOMNIKA JOURNAL)
Rice is the most important food commodity in Indonesia. In order to achieve affordability and the fulfillment of national food consumption according to Indonesian Law No. 18 of 2012, Indonesia needs information to support government policy regarding collection, processing, analysis, storage, presentation, and dissemination. One manifestation of information availability to support government policy is forecasting. Exponential smoothing and neural network methods are commonly used for forecasting because they provide satisfactory results. Our study compares variants of exponential smoothing with a backpropagation neural network model to forecast rice production. The evaluation is summarized using the Mean Absolute Percentage Error (MAPE) and Mean Square Error (MSE). The results show that the neural network method is preferable to the statistical method, since it has lower MSE and MAPE values.
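Both error measures cited are standard; for actual values \(y_t\) and forecasts \(\hat{y}_t\) over \(n\) periods:

\[
\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n} \left(y_t - \hat{y}_t\right)^2,
\qquad
\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n} \left|\frac{y_t - \hat{y}_t}{y_t}\right|.
\]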
The document discusses the critical path method (CPM) and PERT, which are project management techniques. CPM breaks a project into tasks, displays them in a flow chart, and calculates the project duration based on time estimates for each task, identifying critical tasks that are time-sensitive. PERT, a variation of CPM, examines task schedules by analyzing the time required for each task and the dependencies between tasks to determine the minimum time needed to complete a project. PERT uses a formula that combines three time estimates - optimistic, most likely, and pessimistic - to calculate the average task duration, as shown below.
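That three-point estimate (the standard PERT beta approximation) gives the expected duration \(t_e\) and variance from optimistic \(o\), most likely \(m\), and pessimistic \(p\) estimates:

\[
t_e = \frac{o + 4m + p}{6}, \qquad \sigma^2 = \left(\frac{p - o}{6}\right)^{2}.
\]

For example, estimates of 2, 4, and 8 days give \(t_e = (2 + 16 + 8)/6 \approx 4.33\) days.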
Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Me... (TELKOMNIKA JOURNAL)
Estimating the effort and cost of software is an important activity for software project managers. A poor estimate (overestimate or underestimate) results in poor software project management. To handle this problem, many researchers have proposed various models for estimating software cost. Constructive Cost Model II (COCOMO II) is one of the best known and most widely used models for estimating software costs; it takes software size, cost drivers, and scale factors as inputs. However, this model is still lacking in accuracy. To improve it, this study examines the effect of the cost factors and scale factors on the accuracy of effort estimation, using Particle Swarm Optimization (PSO) to optimize the parameters of the COCOMO II model. The proposed method is implemented on the Turkish Software Industry dataset, which has 12 data items, and handles improper and uncertain inputs efficiently while improving the reliability of software effort estimates. The experimental results give an MMRE of 34.1939%, indicating higher accuracy and significantly reducing errors of 698.9461% and 104.876%.
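The effort equation being tuned is the standard COCOMO II post-architecture model, whose constants are typical calibration targets for PSO:

\[
\mathrm{PM} = A \times \mathrm{Size}^{E} \times \prod_{i=1}^{17} \mathrm{EM}_i,
\qquad
E = B + 0.01 \sum_{j=1}^{5} \mathrm{SF}_j,
\]

where PM is effort in person-months, Size is in thousands of source lines of code, the \(\mathrm{EM}_i\) are the effort multipliers (cost drivers), and the \(\mathrm{SF}_j\) are the scale factors; the published COCOMO II.2000 calibration sets \(A = 2.94\) and \(B = 0.91\).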
Gross Domestic Product (GDP) has become the single best indicator of economic growth and represents the total worth of all goods and services produced within a country's boundaries in a specific year. GDP per capita, on the other hand, is mostly related to the standard of living over time. In this work, we focus on the Greek GDP for the period 1971-2020 and use the Box-Jenkins (BJ) approach to build an Autoregressive Integrated Moving Average (ARIMA) model. GDP data was collected in annual form from the World Bank, with values in constant 2015 USD. The results indicate that ARIMA(1,2,1) is the most appropriate model according to the model selection criteria. The work aims to contribute to the limited literature on Greek GDP and ARIMA, and to indicate directions for future research.
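A Box-Jenkins fit of this shape takes only a few lines with statsmodels; the sketch below assumes an annual GDP series is already available (the file name and column names are placeholders).

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder: annual Greek GDP in constant 2015 USD, indexed by year.
gdp = pd.read_csv("greek_gdp.csv", index_col="year")["gdp"]

# ARIMA(p=1, d=2, q=1): one AR term, twice-differenced series, one MA term.
model = ARIMA(gdp, order=(1, 2, 1))
result = model.fit()

print(result.summary())          # coefficients plus AIC/BIC selection criteria
print(result.forecast(steps=5))  # five-year-ahead point forecasts
```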
IRJET- Overview of Forecasting Techniques (IRJET Journal)
This document provides an overview of different forecasting techniques, including qualitative and quantitative methods. It discusses several qualitative techniques like the Delphi method, consumer market surveys, and jury of executive opinion. It also examines various quantitative techniques such as the moving average method, weighted moving average method, exponential smoothing, and least squares. The document serves to introduce students to common forecasting approaches and provide examples of each type of technique.
With the growth of urbanization, industrialization, and population, there has been a huge increase in traffic, and with it a host of problems, including congested roads, accidents, and traffic rule violations at heavily loaded traffic signals. This adversely affects the economy of the country as well as causing loss of lives. Speed control is therefore urgently needed, given the increased rate of accidents reported in everyday life. Traffic offences have increased due to excessive traffic on the roads, the main cause being high vehicle speed: driving beyond the normal speed limit is called a speed violation. This paper formulates these various issues and studies how, with the help of reinforcement learning and optimization, a modified neural network can address them, using forward chaining and backpropagation algorithms. Omesh Goyal | Chamkour Singh, "A Review on Traffic Signal Identification", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23557.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/23557/a-review-on-traffic-signal-identification/omesh-goyal
Integration Method of Local-global SVR and Parallel Time Variant PSO in Water... (TELKOMNIKA JOURNAL)
Flood is a type of natural disaster that is hard to predict; one of its main causes is continuous rain. In meteorological terms, floods arise from high rainfall and high sea tides, which raise the water level. Analyzing rainfall and water level period by period has not been able to solve the existing problems. Therefore, this study proposes an integration of Parallel Time Variant PSO (PTVPSO) and local-global Support Vector Regression (SVR) to forecast the water level. SVR serves as the regression method for forecasting, the local-global concept minimizes computing time, and PTVPSO optimizes the SVR parameters to obtain maximum performance and more accurate results. This system is intended to support a flood early warning system under erratic weather.
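The SVR parameters that a PSO variant would tune are the usual C, gamma, and epsilon of an RBF-kernel regressor. A minimal scikit-learn sketch with placeholder lag features follows; the lag construction, data, and parameter values are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def make_lag_features(series, n_lags=3):
    """Supervised pairs: predict the next level from the last n_lags levels."""
    X = [series[i:i + n_lags] for i in range(len(series) - n_lags)]
    y = series[n_lags:]
    return np.array(X), np.array(y)

water_level = [2.1, 2.3, 2.2, 2.6, 2.9, 3.1, 2.8, 2.7, 3.0, 3.4, 3.6]  # illustrative
X, y = make_lag_features(water_level)

# C, gamma, and epsilon are exactly the knobs a PSO variant would optimize.
model = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01).fit(X, y)
print(model.predict([water_level[-3:]]))  # one-step-ahead water level
```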
This document summarizes a research paper that proposes a new inventory prediction method for supply chain management called BP-GA chaos prediction algorithm. The method uses a backpropagation neural network combined with a genetic algorithm to forecast inventory levels based on chaotic time series analysis. It aims to overcome limitations of traditional chaos prediction approaches. The paper reviews other inventory forecasting research and chaotic prediction methods. It then describes the new hybrid BP-GA method in detail, which establishes a chaotic neural network model optimized through a genetic algorithm. An experiment applying this method to inventory prediction is said to achieve good results.
"Fuzzy based approach for weather advisory system”iosrjce
This document describes a proposed fuzzy logic-based weather advisory system. It begins with background on how agricultural productivity depends on weather and how weather forecasts can help farmers plan. It then discusses using a fuzzy approach to develop the system given the uncertain nature of weather. The rest of the document reviews related work applying fuzzy logic to weather-related problems and provides details on fuzzy logic and how it can model imprecise systems through membership functions and if-then rules. The overall goal is to develop a weather advisory system to help farmers minimize losses from adverse conditions.
Amazon products reviews classification based on machine learning, deep learni... (TELKOMNIKA JOURNAL)
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about the product. These reviews are very helpful for store owners and product manufacturers for improving their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using standard performance measures. BERT turns out to be the best-performing model on D1, with an accuracy of 90% on features derived by word embedding models, while CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 compared to D1.
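As a minimal illustration of the classical ML path described (TF-IDF features feeding one of the listed classifiers), here is a scikit-learn sketch; the tiny inline dataset is invented purely for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great product, works perfectly",
           "terrible quality, broke in a day",
           "excellent value and fast shipping",
           "awful, do not buy this"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF over word unigrams and bigrams feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["works great, totally worth it"]))  # expect: [1]
```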
Design, simulation, and analysis of microstrip patch antenna for wireless app... (TELKOMNIKA JOURNAL)
In this study, a microstrip patch antenna operating at 3.6 GHz was built and tested to evaluate its performance. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, serves as the substrate material for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the proposed antenna design. The goals of this study were wider bandwidth, a lower voltage standing wave ratio (VSWR), and lower return loss, with the main goal being higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. The simulation also revealed a side-lobe level at phi of -28.8 dB, much better than those of earlier works. This makes the antenna a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida... (TELKOMNIKA JOURNAL)
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
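For orientation, a discrete PI controller of the kind being tuned computes the packet-drop probability from the queue-size error. The sketch below is a generic form only: the paper's nonlinear gain shaping and snake-optimized values are not reproduced, and the gains here are arbitrary placeholders.

```python
def make_pi_controller(kp=0.0005, ki=0.00005, target_queue=200.0):
    """Generic discrete PI loop for AQM: queue error -> packet drop probability."""
    integral = 0.0
    def step(queue_size):
        nonlocal integral
        error = queue_size - target_queue      # positive when the queue is too long
        integral += error                      # accumulate for the integral term
        p = kp * error + ki * integral         # PI control law
        return min(max(p, 0.0), 1.0)           # clamp to a valid probability
    return step

controller = make_pi_controller()
for q in (180, 220, 260, 240, 210):
    print(f"queue={q:3d} -> drop prob {controller(q):.4f}")
```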
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos.The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often effects the characteristics of either linear or nonlinear systems. Plus, the stabilization process may be deteriorated thus incurring a catastrophic effect to the system performance. As such, this manuscript addresses the concept of matching condition for the systems that are suffering from miss-match uncertainties and exogeneous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in the proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty and the system with miss-matched uncertainty. Lastly, two-dimensional numerical expressions are provided to practice the proposed proposition. The outcome shows that matching condition has the ability to change the system to a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance ver the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
overcome the uncertainty and deliver high accuracy. This research uses the inflation dataset for 2003-2017 from Bank Indonesia. A previous study by Sari et al. (2016) addressed inflation forecasting using a neural network model [24]. That model is compared with the fuzzy time series forecasting model proposed by Anggodo and Mahmudy (2017), which successfully forecasted the minimum living needs [25]. In that research, forecasting was done with the FLR method, and the forecasting result was improved by optimizing the interval values using ACPSO. Optimizing the FLR interval values helps place the forecast value points correctly, so a highly accurate result can be obtained. Hybrid ACPSO is the right combination: automatic clustering forms the right number of intervals [26], [15], [27], while PSO optimizes the interval boundary values, yielding flexible intervals and thus forecasting results with a high degree of accuracy [21], [28], [29].
Interval optimization using ACPSO has given good results. However, it is still insufficient, and the computation takes a long time. The search for interval values with PSO often runs into trouble, especially when seeking the global solution, because PSO focuses too heavily on searching its local area. Moreover, obtaining maximum results with PSO requires large parameter values, which makes the computational process long [25]. Two main problems must therefore be solved: first, the long computing time; second, the interval search that has not yet been maximized. The more complex the forecast values, the harder it is to find the maximum interval solution, and finding it certainly takes a long computing time. This research proposes an improved interval optimization that searches a broader solution space with relatively shorter computation, using a new method: the auto-speed acceleration algorithm. The proposed method aims to solve both problems, i.e., finding a maximum interval solution with fast computation time. The auto-speed acceleration algorithm is a new method in swarm intelligence and interval optimization that has never been proposed in any previous paper [22], [30]-[33].
The auto-speed acceleration algorithm was inspired by the concept of one-dimensional motion in classical physics developed by Galileo Galilei and Isaac Newton [34], [35]. In one-dimensional motion, a particle moves from one position to another, and the displacement is strongly influenced by the acceleration. The acceleration of a particle affects its speed and can therefore control the range of its displacement. By adjusting the acceleration, the auto-speed acceleration algorithm controls particle displacement to find more optimal solutions. Besides managing particle displacement correctly, the choice of movement direction also strongly influences the search, helping the particle escape the local search area so that the maximum solution can be found. The auto-speed acceleration algorithm can find a solution quickly because it controls the range of particle displacement. It can be implemented simply, in just a few lines of computer code; each step requires only a few primitive mathematical operators from fundamental kinematics in one dimension. The focus of the algorithm is the automatic adjustment of the acceleration of each particle's motion; this change affects the speed, direction, and distance of movement [36]. This research presents the results of applying the auto-speed acceleration algorithm to improve the hybrid ACPSO interval optimization model proposed by Anggodo and Mahmudy (2017) [25] on the inflation forecasting problem.
2. Fuzzy Logical Relationships
This section discusses the fuzzy logical relationships forecasting method and the optimization of the interval values using hybrid ACPSO. Forecasting time series data with fuzzy logical relationships proceeds in several stages [22], [29].
Step 1: first, optimize the interval values using hybrid ACPSO and calculate the midpoint of each interval.
Step 2: once the intervals R1, R2, R3, …, Rn have been obtained, form the fuzzy sets Ti, labelled as follows:
T1=1/R1+0.5/R2+0/R3+0/R4+...+0/Rn-1+0/Rn,
T2=0.5/R1+1/R2+0.5/R3+0/R4+...+0/Rn-1+0/Rn,
T3=0/R1+0.5/R2+1/R3+0.5/R4+...+0/Rn-1+0/Rn,
...
Tn=0/R1+0/R2+0/R3+0/R4+...+0.5/Rn-1+1/Rn.
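As an illustration, the membership pattern above (1 on the matching interval, 0.5 on its neighbors, 0 elsewhere) could be built with the following Python sketch; the function name and matrix representation are assumptions for illustration, not the authors' implementation.

    def fuzzy_sets(n):
        """Membership of fuzzy set T_i in interval R_j: 1 on the diagonal,
        0.5 for adjacent intervals, 0 elsewhere, as in the definitions above."""
        return [[1.0 if i == j else 0.5 if abs(i - j) == 1 else 0.0
                 for j in range(n)] for i in range(n)]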
Step 3: fuzzify each datum: if a datum falls within interval Ri, assign it the fuzzy set Ti.
Step 4: form the fuzzy logical relationship between the fuzzy sets of consecutive data in the time series, so that the data of year t and year t+1 form the fuzzy logical relationship Tt → Tt+1. Once formed, group the relationships that share the same current state.
Step 5: finally, perform the forecasting according to the following principles:
Principle 1: when the fuzzy value of year t is TA and there is a fuzzy logical relationship group TA → TB, the forecast for year t+1 is MB, which is obtained from the midpoint of interval RB.
Principle 2: when the fuzzy value of year t is Ti and there is a fuzzy logical relationship group Ti → TB1(x1), TB2(x2), …, TBp(xp), where xk is the number of occurrences of each relationship, the forecast for year t+1 is computed as follows:
forecast = (x1·MB1 + x2·MB2 + … + xp·MBp) / (x1 + x2 + … + xp), (1)

where MBk is the midpoint of interval RBk.
Principle 3: when the fuzzy value of year t is Tj and there is a fuzzy logical relationship group Tj → #, the forecast for year t+1 is Mj, the midpoint of interval Rj.
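To make step 5 concrete, the following is a minimal Python sketch of the three forecasting principles; the function name and data structures are illustrative assumptions, not the authors' implementation.

    def forecast_next(current_state, flr_groups, midpoints):
        """current_state: index i of the fuzzy set T_i for year t.
        flr_groups: dict mapping i -> list of (j, x_j) pairs, where T_i -> T_j
        occurred x_j times; an empty list plays the role of 'T_j -> #'.
        midpoints: list where midpoints[j] is M_j, the midpoint of R_j."""
        successors = flr_groups.get(current_state, [])
        if not successors:                      # principle 3: T_j -> #
            return midpoints[current_state]
        if len(successors) == 1:                # principle 1: T_A -> T_B
            j, _ = successors[0]
            return midpoints[j]
        # principle 2: occurrence-weighted average of midpoints, equation (1)
        num = sum(x * midpoints[j] for j, x in successors)
        den = sum(x for _, x in successors)
        return num / den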
2.1. Interval Optimization
A maximum result is the key goal of forecasting. In fuzzy logical relationships, several parameters affect the forecasting results; the interval has the largest influence, because the forecast values of fuzzy logical relationships are taken from the interval midpoints. Hybrid ACPSO is an interval optimization that yields maximum forecasting results [25]. Automatic clustering is used to find the right distribution of interval values so that the best forecast values can be found [26], [37], [38], while particle swarm optimization is used to optimize the boundary of each interval so that the interval boundaries that give the most accurate forecasts can be found [39], [28], [29]. The first step is clustering with the automatic clustering method, followed by optimizing the interval values with particle swarm optimization. The steps of automatic clustering are as follows:
Step 1: first, sort the time series data S1, S2, …, Sn in ascending order, then calculate the average difference between adjacent data, avg_diff, as follows:

avg_diff = Σi=1..n-1 (Si+1 – Si) / (n – 1), (2)
Step 2: form the clusters from the time series data. The first datum becomes a new cluster, and each subsequent datum follows these principles:
Principle 1: assume the current clusters and data are {S1}, S2, S3, ..., Si, ..., Sn. If S2 – S1 ≤ avg_diff, then put S2 into the cluster containing S1; if not, create a new cluster.
Principle 2: assume the current clusters and data are {S1}, …, {…, Si}, {Sj}, Sk, …, Sn. If Sk – Sj ≤ avg_diff and Sk – Sj ≤ Sj – Si, then put Sk into the cluster containing Sj; if not, create a new cluster with Sk.
Principle 3: assume the current clusters and data are {S1}, …, {…}, {…, Si}, Sj, …, Sn. If Sj – Si ≤ avg_diff and Sj – Si ≤ clus_diff, then put Sj into the cluster containing Si; if not, create a new cluster with the datum Sj. clus_diff is the average difference within the current cluster, obtained as follows:
clus_diff = Σi=1..m-1 (ci+1 – ci) / (m – 1), (3)

where c1, c2, …, cm are the data in the current cluster.
Step 3: adjust the contents of each cluster according to the following principles:
Principle 1: if a cluster contains more than two data, keep only the smallest and largest datum.
Principle 2: if a cluster contains two data, keep both.
Principle 3: if a cluster contains a single datum Sj, delete it and add the data Sj – avg_diff and Sj + avg_diff, subject to the following situations:
Case 1: if it is the first cluster, replace the datum Sj – avg_diff with Sj.
Case 2: if it is the last cluster, replace the datum Sj + avg_diff with Sj.
Case 3: if the value Sj – avg_diff is smaller than the data of the preceding cluster, then principle 3 does not apply.
Step 4: assume the clusters are as follows:
{S1, S2}, {S3, S4}, {S5, S6}, ..., {Sr}, {Ss, St}, ..., {Sn-1, Sn}.
Transform the clusters into intervals according to the following principles:
4.1 First, change the cluster {S1, S2} into the interval [d1, d2).
4.2 If the current interval is [di, dj) and the current cluster is {dk, dl}, then:
(1) if dj ≥ dk, form the interval [dj, dl). The interval [dj, dl) then becomes the current interval, and the next cluster {dm, dn} becomes the current cluster.
(2) if dj < dk, change the current cluster {dk, dl} into the interval [dk, dl) and create a new interval [dj, dk). Then [dk, dl) becomes the current interval and {dm, dn} becomes the current cluster. If the current interval is [di, dk) and the current cluster is {df}, then create the interval [di, df).
4.3 Repeat steps 4.1 and 4.2 until every cluster has become an interval.
Step 5: divide each interval into p sub-intervals, where p ≥ 1. To obtain the optimal value of p, testing has to be done.
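For illustration, the cluster-formation part of the procedure above (steps 1 and 2) could be sketched in Python as follows; the function name and the simple list representation are assumptions, and the sketch assumes at least two data points.

    def auto_cluster(data):
        """Sketch of automatic clustering, steps 1-2 (cluster formation only)."""
        s = sorted(data)
        n = len(s)
        # equation (2): average difference between adjacent sorted data
        avg_diff = sum(s[i + 1] - s[i] for i in range(n - 1)) / (n - 1)
        clusters = [[s[0]]]
        for x in s[1:]:
            cur = clusters[-1]
            if len(cur) == 1 and len(clusters) == 1:
                # principle 1: compare with the single datum of the first cluster
                joins = (x - cur[-1]) <= avg_diff
            elif len(cur) == 1:
                # principle 2: also compare with the gap to the previous cluster
                prev_last = clusters[-2][-1]
                joins = ((x - cur[-1]) <= avg_diff
                         and (x - cur[-1]) <= (cur[-1] - prev_last))
            else:
                # principle 3: equation (3), average difference within the cluster
                m = len(cur)
                clus_diff = sum(cur[i + 1] - cur[i] for i in range(m - 1)) / (m - 1)
                joins = (x - cur[-1]) <= avg_diff and (x - cur[-1]) <= clus_diff
            if joins:
                cur.append(x)
            else:
                clusters.append([x])
        return clusters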
After completing the formation of the intervals, the next step is to optimize the value of each interval using PSO. PSO is a meta-heuristic method that seeks the best solution [40]. A particle is represented in real code, where a particle holds the lower and upper boundary values of all intervals. The quality of each solution is represented as a cost: the smaller the cost, the better the solution, and vice versa. The cost is the error value of a solution, obtained from the root mean square error (RMSE) [24]. RMSE compares the actual data with the forecast produced by fuzzy logical relationships using the interval values represented in the particle.
RMSE = √(Σi=1..n (actuali – forecasti)² / n), (4)
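As a small illustration, the cost in equation (4) could be computed as follows; the function name and the list arguments are hypothetical.

    import math

    def rmse(actual, forecast):
        """RMSE between actual data and FLR forecasts, equation (4)."""
        n = len(actual)
        return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)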
PSO works by spreading the particles randomly over the entire search space; the particles then move in search of solutions. The movement in PSO is driven by the speed, and there is a wide variety of speed-update models. The model proposed by Eberhart and Shi (2000) gives maximum results by attending to both local and global search [41]:

vt+1 = w·vt + c1·rand()·(pBest – xt) + c2·rand()·(gBest – xt), (5)

The speed vt of each dimension of each particle is influenced by the inertia weight value, the local best value of each particle (pBest), and the global best value over all particles (gBest). The inertia weight (w) follows the model proposed by Ratnaweera et al. (2004) to obtain a more precise result; the inertia weight is recalculated at each iteration [42]:
w = (wmax – wmin)·((Iterationmax – Iterationt)/Iterationmax) + wmin, (6)

The values wmax and wmin are the largest and smallest inertia weight parameters, while Iterationmax and Iterationt are the specified maximum number of iterations and the current iteration count. Using this inertia weight is very helpful in searching for solutions with
better results [43]. After the value vt+1 is obtained, the position of each particle is changed in each dimension:

xt+1 = xt + vt+1, (7)

The process of cost calculation, speed calculation, and particle position update is repeated until the iterations are done, so the output of PSO is the gBest value, namely the best interval values for fuzzy logical relationships to use in forecasting. A sketch of this loop is given after this paragraph.
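The following is a minimal Python sketch of the PSO update in equations (5)-(7); the cost function is a placeholder (e.g., RMSE of forecasting with the candidate boundaries), and the default parameter values follow Table 1 but are otherwise assumptions.

    import random

    def pso(cost, dim, lo, hi, n_particles=500, iters=450,
            w_min=0.4, w_max=0.8, c1=1.0, c2=1.0, v_min=-0.8, v_max=0.8):
        """Sketch of PSO for interval-boundary optimization, equations (5)-(7)."""
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        v = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in x]
        pcost = [cost(p) for p in x]
        g = min(range(n_particles), key=lambda i: pcost[i])
        gpos, gcost = pbest[g][:], pcost[g]
        for t in range(iters):
            w = (w_max - w_min) * (iters - t) / iters + w_min   # equation (6)
            for i in range(n_particles):
                for d in range(dim):
                    v[i][d] = (w * v[i][d]
                               + c1 * random.random() * (pbest[i][d] - x[i][d])
                               + c2 * random.random() * (gpos[d] - x[i][d]))  # (5)
                    v[i][d] = max(v_min, min(v_max, v[i][d]))   # velocity control
                    x[i][d] += v[i][d]                          # equation (7)
                c = cost(x[i])
                if c < pcost[i]:
                    pbest[i], pcost[i] = x[i][:], c
                    if c < gcost:
                        gpos, gcost = x[i][:], c
        return gpos, gcost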
3. Motion in One Dimension
Movement is the displacement of a particle from one point to another; the displacement distance achieved by the particle is influenced by the speed [43]. Every movement in the one-dimensional concept is always bounded: the particle moves either to the right or to the left [44]. The speed is therefore formed from the distance of the particle's displacement per unit of time taken [35]. When the particle velocity is constant, the displacement distance per unit time does not change. In general, however, the speed changes in magnitude, and this change is called the acceleration [45]; thus the acceleration a is the change of speed divided by the time difference:

a = (v – v0)/(t – t0), (8)
When the speed of the particle becomes higher than at the previous time, the acceleration is called positive; conversely, when the particle's speed decreases, the acceleration is called negative [45]. The concept of gravitational acceleration studied by Galileo showed that all objects on Earth have the same acceleration [46]. In addition, applying Newton's second law of motion in one dimension indicates that the acceleration is not necessarily constant [41]. For an object moving with the same acceleration from time t = 0 until a certain time t [44], when the acceleration is constant, equation (8) can be written as follows:

v – v0 = at

The object starts with speed v0, ends with speed v, and moves during time t; with a constant acceleration a, the equation for the speed at the final time is formed [44]:

v = v0 + at, (9)
When the speed changes over time under constant acceleration, the average speed is the arithmetic mean of the initial and final speeds. Therefore, the average speed ṽ is as follows:

ṽ = (v0 + v)/2, (10)

The displacement of a particle is the average speed multiplied by the time under constant acceleration [35], so the following equation is formed:

s = ṽt
s = ((v0 + v)/2)·t, (11)
From equation (11), the three general equations commonly used in one-dimensional motion can be derived; a small numeric illustration follows.
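As a quick numeric check of equations (9)-(11), with arbitrarily chosen example values (not from the paper):

    v0, a, t = 0.0, 2.0, 3.0    # initial speed, constant acceleration, time
    v = v0 + a * t              # equation (9): v = 6.0
    v_avg = (v0 + v) / 2        # equation (10): v_avg = 3.0
    s = v_avg * t               # equation (11): s = 9.0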
3.1. Auto-speed Acceleration Algorithm
The concept of the auto-speed acceleration algorithm was inspired by the movement of objects in one dimension, where an object, in this case a particle, moves according to an established speed. The speed is affected by the acceleration and the travel time, yet the acceleration is the most significant variable shaping the speed value. Based on this, a new method, the auto-speed acceleration algorithm, is proposed, with an iterative scheme like heuristic methods in general. In each iteration, the algorithm focuses on adjusting the acceleration according to the current condition.
The condition is determined from the cost values of the present and previous positions; a smaller cost value means a better solution. When the cost value becomes smaller, the acceleration value is decreased, whereas when the cost value becomes greater, the acceleration value is increased. In the first condition, reducing the acceleration decreases the speed value; this is done because a position with a small cost value typically has neighbors with similar properties.
The second condition lets the particle escape its surroundings, so that the position can perform a global search for a position with a smaller cost and thus provide more optimal solutions. These two conditions make the auto-speed acceleration algorithm a balanced search: decreasing the speed value lets the particle exploit the local area, while increasing the acceleration, and with it the speed, lets the particle explore the entire search area. Figure 1 shows the stages of the auto-speed acceleration algorithm, starting from the initialization of the particles and some parameters of the algorithm.
Figure 1. Procedure of auto-speed acceleration algorithm
The initial particle position that is generated is used as the starting point of the movement in the search for solutions. The initialized parameters are: the acceleration value a, the initial speed v0, the number of iterations, the time t, the conditional q, and the direction of the particle. The acceleration value is obtained from the upper and lower limits of the particle dimensions divided by r, where r is a variable that determines the acceleration value a. The initial speed v0 is initialized to 0. The number of iterations is assigned in advance to limit how long the particles keep moving. The time here is used to determine the speed used by the particles, while the direction of the particle determines the direction of particle movement. The movement direction is generated opposite to the initial position, in order to maximize the search for solutions, so the movement direction p is obtained from the particle condition xp as follows:
if (xpi > 0) then
    pi = 1
else
    pi = -1
From these conditions, a direction value is obtained that turns back from the initial position. After the parameters of the auto-speed acceleration algorithm have been generated, the iteration stage begins. The repetition runs until a stopping condition is met; there are two conditions for stopping the loop. The first condition is when the repetition has run for the specified number of iterations. The second condition is when the final speed value is less than or equal to zero. The loop body is the main process.
Several steps are performed in each repetition of the auto-speed acceleration algorithm. First, calculate the speed value of each particle. The speed of each dimension in a particle is obtained using equation (9), where the acceleration variable r and the amount of time t are defined in advance, but to get maximum results testing needs to be done. In addition, the initial velocity value is initialized to zero. After the velocity of each dimension of each particle has been obtained, the change in the distance value is computed from equation (11). The position of each particle dimension is then changed by summing the starting position with the distance that has been calculated, while also considering the direction of movement from the beginning, as follows:

xi = x0i + pi·si, (12)

Based on this equation, a new position is obtained in the opposite direction from the starting position; this treatment is done so that the particle can get out of the local search and find positions that are more optimal. Then, the new position of the particle xp is evaluated. The evaluation calculates the cost value c1, where the cost is derived from the RMSE between the actual data and the forecasting results, equation (4); the forecasting results are obtained with the fuzzy logical relationships method. After getting the cost value, the next step is the condition for determining the new acceleration value.
In the first condition, when the cost c1 of the current position is less than the cost c0 of the previous position, the acceleration value is reduced by dividing the acceleration by q, where q is a parameter whose value is set in advance. A smaller cost value means the present position is better than the previous one, so reducing the acceleration aims to narrow the displacement distance; this is done because a position with a maximum solution value is likely to have neighbors with maximum solutions as well. In the second condition, when the cost value is greater than that of the previous position, the acceleration value is increased, which widens the displacement distance; this treatment is intended to escape the local area so that better solutions can be reached. After that, the speed update, distance update, position update, evaluation, and condition check are repeated until the stopping condition is met or the speed v1 is less than or equal to 0. The result of the auto-speed acceleration algorithm is a particle position with a better solution value. A sketch of this loop is given below.
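The following Python sketch outlines one run of the auto-speed acceleration algorithm as described above. The cost function and default parameter values are placeholders (the defaults follow Table 2), the per-dimension acceleration derived from the search bounds follows the description of r, and treating the acceleration increase as multiplication by q is an assumption, since the paper only states that the acceleration is increased.

    def auto_speed_acceleration(cost, x0, lo, hi, r=7, t=1.0, q=10, iters=5):
        """Sketch of the auto-speed acceleration algorithm; cost(x) returns RMSE."""
        a = (hi - lo) / r                      # acceleration: bounds divided by r
        p = [1 if xi > 0 else -1 for xi in x0] # movement direction per dimension
        x, c0 = list(x0), cost(x0)
        v0 = 0.0                               # initial speed is zero
        for _ in range(iters):
            v1 = v0 + a * t                    # equation (9)
            if v1 <= 0:                        # second stopping condition
                break
            s = ((v0 + v1) / 2) * t            # equations (10)-(11): displacement
            x_new = [xi + pi * s for xi, pi in zip(x, p)]   # equation (12)
            c1 = cost(x_new)
            if c1 < c0:                        # better position: reduce acceleration
                a = a / q
                x, c0 = x_new, c1
            else:                              # worse position: increase acceleration
                a = a * q                      # assumed multiplicative widening
        return x, c0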
3.2. Improve Interval Optimization
This research proposes improving the interval values in fuzzy logical relationships. Previous research optimized the interval values using ACPSO [25]. This research improves the particle values in each PSO iteration using the auto-speed acceleration algorithm. This is because PSO often gets stuck in local search, making it difficult to find a position that provides the maximum solution, so the optimization model needs an improvement that focuses on global search. The auto-speed acceleration algorithm is very helpful in finding solutions through global search; besides being able to get out of the local search, it can also find the best position in a local area. Figure 2 shows the process of improving the interval optimization of fuzzy logical relationships with the auto-speed acceleration algorithm.
Figure 2. Procedure of the improved interval optimization
The auto-speed acceleration algorithm is performed in every iteration of PSO. First, one particle is picked at random from the whole PSO swarm to undergo the auto-speed acceleration algorithm, producing a new position that provides a more optimal solution than before. A sketch of this hybrid step is given below.
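A minimal sketch of how the auto-speed acceleration step could be embedded in the PSO loop of section 2.1 follows; it reuses the hypothetical functions from the earlier sketches and is illustrative, not the authors' exact implementation.

    import random

    def hybrid_iteration(particles, cost, lo, hi):
        """One hybrid step: refine a random particle with the auto-speed
        acceleration algorithm inside a PSO iteration (illustrative)."""
        # ... standard PSO velocity/position update for all particles ...
        i = random.randrange(len(particles))        # pick one particle at random
        refined, refined_cost = auto_speed_acceleration(
            cost, particles[i], lo, hi)             # sketch from section 3.1
        if refined_cost < cost(particles[i]):       # keep only an improvement
            particles[i] = refined
        return particles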
4. Results and Analysis
4.1. Best Parameter
The dataset used in this research is the inflation data for 2003-2017, obtained from Bank Indonesia. The 2003-2017 data are used as training data, except for March 2008 to October 2009, which serve as the testing data [24], as shown in Figure 3.
Figure 3. Dataset inflation
Interval training for the fuzzy logical relationships is conducted by forecasting the years 2003-2014. In the first model, the forecasting is done with interval values optimized using hybrid ACPSO [25], [26]. Based on the test results, the smallest average cost value is obtained for the PSO parameters; the parameters tested include the number of particles, the number of iterations, the inertia weight, and the velocity controls, as shown in Table 1.
Table 1. Best Parameters of PSO and Automatic Clustering

Population of particles              500
Number of iterations                 450
Inertia weight (wmin, wmax)          (0.4, 0.8)
Acceleration coefficient (c1, c2)    (1, 1)
Velocity control (vmin, vmax)        (-0.8, 0.8)
Number of sub-intervals              4
With these PSO and automatic clustering parameter values, forecasting using fuzzy logical relationships is performed, and an average cost value of 0.94036 is obtained. In the second model, the interval optimization is improved using the auto-speed acceleration algorithm to improve the solution. Table 2 shows the specified parameter values of the auto-speed acceleration algorithm.
Table 2. Parameters of the Auto-speed Acceleration Algorithm

Variable of acceleration (r)    7
Number of time (t)              1
Variable of conditional (q)     10
Improving the interval optimization with the auto-speed acceleration algorithm delivers a significantly better result, giving an average cost value of 0.467; the results for each iteration are shown in Figure 4.
Figure 4. Result of cost of the auto-speed acceleration algorithm
Figure 4 shows that premature convergence already happens in the early iterations, so a new treatment is applied to the parameter values of PSO and automatic clustering: a particle population of 10, 5 iterations, inertia weight of 0.4 and 0.8, velocity control of -0.8 and 0.8, variable of acceleration of 7, number of time of 1, and variable of conditional of 10. With these new PSO values, the optimization improvement is carried out with the auto-speed acceleration algorithm, and the average cost value obtained is 0.467. In addition, obtaining the maximum interval value through optimization with hybrid ACPSO and the auto-speed acceleration algorithm requires testing the number of sub-intervals in automatic clustering, as shown in Table 3. Table 3 shows that the division of intervals into p sub-intervals is very influential on the accuracy of the forecasting results.
Table 3. Experimental Results for p

Number of p | 1st simulation | 2nd simulation | 3rd simulation | 4th simulation | 5th simulation | Average
1           | 0.591211441    | 0.720663673    | 0.737593734    | 0.603191512    | 0.68461244     | 0.66745456
...         | ...            | ...            | ...            | ...            | ...            | ...
13          | 0.259013015    | 0.267060462    | 0.254073903    | 0.261033924    | 0.249326836    | 0.25810162
14          | 0.254063746    | 0.261794058    | 0.261292075    | 0.252989863    | 0.262431558    | 0.25851426
15          | 0.324219161    | 0.329085733    | 0.295316124    | 0.322589322    | 0.298875311    | 0.31401713
5. Comparison of Works
Based on the tests, the best parameter values for automatic clustering, PSO, and the auto-speed acceleration algorithm were obtained. In addition, testing is carried out on the dataset from 2015 through 2017 and compared with the earlier research by Sari and Mahmudy (2016) on inflation forecasting using a fuzzy logic approach and a neural network, as shown in Table 4.
Table 4. Comparison of Methods

Method                       | Cost of Testing | Computation Time
FIS Sugeno [24]              | 0.269437532     | -
Neural Network [24]          | 0.203961692     | -
Hybrid FLR using ACPSO [25]  | 0.179394200     | 77.441 s
Proposed Method              | 0.106263822     | 1.245 s
The comparisons show that the model proposed in this study provides maximum results. This is due to the proper combination of interval optimizations: automatic clustering forms the appropriate number of intervals, while PSO and the auto-speed acceleration algorithm find the interval values that give maximum results. The interval search is balanced, with PSO focused on searching the local area while the auto-speed acceleration algorithm focuses on the global search. Moreover, the proposed model provides its results with a fast computing time.
6. Conclusion
This study proposed an interval optimization improvement for FLR using the new auto-speed acceleration algorithm. The previous interval optimization model searched the intervals using ACPSO. The test results show that the auto-speed acceleration algorithm is very helpful in repairing the solution search, so that a maximum solution can be found more quickly.
References
[1] Kilian L, Murphy DP. Why Agnostic Sign Restrictions Are Not Enough: Understanding the Dynamics of Oil Market VAR Models. Journal of the European Economic Association. 2012; 10(5): 1166-1188.
[2] Lippi F, Nobili A. Oil and the Macroeconomy: A Quantitative Structural Analysis. Journal of the
European Economic Association. 2012; 10(5): 1059-1083.
[3] Charnavoki V, Dolado JJ. The Effects of Global Shocks on Small Commodity-Exporting Economies: Lessons from Canada. American Economic Journal: Macroeconomics. 2014; 6(2): 207-237.
[4] Cavallo A, Cruces G, Perez-Truglia R. Learning from Potentially Biased Statistics. Brookings Papers on Economic Activity. 2016: 59-92.
[5] Stoblein M, Kanet JJ, Gorman M, Minner S. Time-phased Safety Stocks Planning and its Financial
Impacts: Empirical Evidence Based on European Economics Data. Int. J. Production Economics.
2014; 149: 47-55.
[6] Ascari G, Sbordone AM. The Macroeconomics of Trend Inflation. Journal of Economic Literature.
2014; 52(3): 679-739.
[7] Matteo C, Mojon B. Global Inflation. The Review of Economics and Statistics. 2010; 3(92): 524-535.
[8] Pearce DK. Comparing Survey and Rational Measures of Expected Inflation: Forecast Performance
and Interest Rate Effects. Journal of Money, Credit and Banking. 1979; 11(4): 447-456.
[9] Bermudez C, Dabus CD, Gonzalez GH. Reexamining the Link between Instability and Growth in Latin
America: A Dynamic Panel Data Estimation using K-Median Clusters. 2015; 52(1): 1-23.
[10] Tyers R, Zhang Y, Cheong TS. China: A New Model for Growth and Development. In: Garnaut R,
Fang C, Song L. China’s Saving and Global Economic Performance. China: ANU Press. 2013: 97-
124.
[11] Barajas A, Steiner R, Villar L, Pabon C. Singular Focus or Multiple Objectives? What the Data Tell Us about Inflation Targeting in Latin America. Economia. 2014; 15(1): 177-213.
[12] Coibion O, Gorodnichenko Y, Wieland J. The Optimal Inflation Rate in New Keynesian Models: Should Central Banks Raise Their Inflation Targets in Light of the Zero Lower Bound? Review of Economic Studies. 2012; 79: 1371-1406.
[13] Trott D. Australia and Zero Lower Bound on Interest Rates: Some Monetary Policy Options. A
Journal of Policy Analysis and Reform. 2015; 22(1): 5-19.
[14] Kumar S, Coibion O, Afrouzi H, Gorodnichenko Y. Inflation Targeting Does Not Anchor Inflation Expectations: Evidence from Firms in New Zealand. Brookings Papers on Economic Activity. 2015: 151-208.
[15] Quadir MM. The Effect of Macroeconomic Variables on Stock Returns on Dhaka Stock Exchange.
International Journal of Economics and Financial Issues. 2012; 2(4): 480-487.
[16] Arlt J, Arltova M. Forecasting of the Annual Inflation Rate in the Unstable Economic Conditions.
Second International Conference on Mathematics and Computers in Sciences and in Industry. 2015:
231-234.
[17] Zhang L, Li J. Inflation Forecasting Using Support Vector Regression. Fourth International Symposium
on Information Science and Engineering. Washington DC, U.S.. 2012: 136-140.
[18] Tang Y, Zhou J. The Performance of PSO-SVM in Inflation Forecasting. 12th International Conference on Service Systems and Service Management (ICSSSM). Guangzhou, China. 2015.
[19] Inoue A, Kilian L. How Useful is Bagging in Forecasting Economic Time Series? A Case Study of U.S. Consumer Price Inflation. Journal of the American Statistical Association. 2008; 103(482): 511-522.
[20] Hartmann M, Herwartz H, Ulm M. A Comparative Assessment of Alternative ex Ante Measures of
Inflation Uncertainty. International Journal of Forecasting. 2017; 33: 76-89.
[21] Chen S, Zhang JP. Time Series Prediction Based on Ensemble ANFIS. Proceedings of the Fourth International Conference on Machine Learning and Cybernetics. Guangzhou, China. 2005; 6: 3552-3556.
[22] Chen SM, Jian WS. Fuzzy Forecasting Based on Two-Factors Second-Order Fuzzy-Trend Logical
Relationship Groups, Similarity Measures and PSO Techniques. Information Sciences. 2017; 391-
392: 65-79.
[23] Cheng SH, Chen SM, Jian WS. Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships
and Similarity Measures. Information Sciences. 2016; 327: 272-287.
[24] Sari NR, Mahmudy WF, Wibawa AP. Backpropagation on Neural Network Method for Inflation Rate
Forecasting in Indonesia. International Journal of Advances in Soft Computing and Its Applications.
2016; 8(3): 70-87.
[25] Anggodo YP, Mahmudy WF. Automatic Clustering and Optimized Fuzzy Logical Relationships for
Minimum Living Needs Forecasting. Journal of Environmental Engineering & Sustainable Technology
(JEEST). 2017; 4(1): 1-7.
[26] Anggodo YP, Mahmudy WF. Peramalan Kebutuhan Hidup Minimum Menggunakan Automatic
Clustering dan Fuzzy Logical Relationship (Forecasting Minimum Living Needs Using Automatic
Clustering and Fuzzy Logical Relationships). Jurnal Teknologi Informasi dan Ilmu Komputer (JTIIK).
2016; 3(2): 94-102.
[27] Wang W, Liu X. Fuzzy Forecasting Based on Automatic Clustering and Axiomatic Fuzzy Set
Classification. Information Sciences. 2015; 294: 78-94.
[28] Chen SM, Chen SW. Fuzzy Forecasting Based on Two-Factors Second-Order Fuzzy-Trend Logical
Relationship Groups and the Probabilities of Trends of Fuzzy Logical Relationships. IEEE
Transactions on Cybernetics. 2015; 45(3): 405-417.
[29] Cheng SH, Chen SM, Jian WS. A Novel Fuzzy Time Series Forecasting Method Based on Fuzzy
Logical Relationships and Similarity Measures. IEEE International Conference on Systems, Man, and
Cybernetics. Hong Kong. 2015: 2250-2254.
[30] Chen YP, Li Y, Wang G, Zheng YF, Xu Q, Fan JH, Cui XT. A Novel Bacterial Foraging Optimization
Algorithm for Feature Selection. Expert Systems with Applications. 2017; 83: 1-17.
[31] Bell N, Oommen BJ. A Novel Abstraction for Swarm Intelligence: Particle Field Optimization.
Autonomous Agents and Multi-Agent Systems. 2017; 31: 362-385.
[32] Sun Y, Yin S, Liu J. Novel DV-hop Method based on Krill Swarm Algorithm used for Wireless Sensor
Network Localization. TELKOMNIKA (Telecommunication Computing Electronics and Control). 2016;
14(4): 1438-1445.
[33] Xiao G, Liu H, Guo Y, Zhou Y. Research on Chaotic Firefly Algorithm and the Application in Optimal
Reactive Power Dispatch. TELKOMNIKA (Telecommunication Computing Electronics and Control).
2017; 15(1).
[34] Giancoli DC. Physics: Principles with Applications. 6th Edition. New Jersey, US. Pearson Prentice
Hall. 2005: 1-946.
[35] Halliday D, Resnick R. Physics for Students of Science and Engineering. Combined Edition. New
York, US. John Wiley & Sons. 1960: 1-1025.
[36] Morgan J. Introduction to University Physics. 1st Edition. Boston. Allyn and Bacon. 1963: 513.
[37] Qiu Q, Zhang P, Wang Y. Fuzzy Time Series Forecasting Model Based on Automatic Clustering
Techniques and Generalized Fuzzy Logical Relationship. Mathematical Problems in Engineering.
2015.
[38] Weber RL, White MW, Manning KV. College Physics. 3rd Edition. New York, US. McGraw-Hill. 1959:
3-640.
[39] Chen SM, Manalu GMT, Pan JS, Liu HC. Fuzzy Forecasting Based on Two-Factors Second-Order
Fuzzy-Trend Logical Relationship Groups and Particle Swarm Optimization Techniques. IEEE
Transactions on Cybernetics. 2013; 43(3): 1102-1117.
[40] Kennedy J, Eberhart R. Particle Swarm Optimization. IEEE International Conference on Neural
Networks. Perth, Western Australia. 1995: 1942-1948.
[41] Eberhart RC, Shi Y. Comparing Inertia Weights and Constriction Factors in Particle Swarm
Optimization. IEEE Congress on Evolutionary Computation. San Diego, CA, US. 2000: 84-88.
[42] Ratnaweera A, Halgamuge SK, Watson HC. Self-Organizing Hierarchical Particle Swarm
Optimizer with Time-Varying Acceleration Coefficients. IEEE Transactions on Evolutionary
Computation. 2004; 8(3): 240-255.
[43] Cholissodin I, Khusna RA, Wihandika RC. Optimization of Equitable Distributions of Teachers Based
on Geographic Location using General Series Time Variant PSO. 1st International Symposium on
Geoinformatics (ISYG). Malang, Indonesia. 2016: 69-75.
[44] Borowitz S, Beiser A. Essentials of Physics. 1st Edition. US. Addison-Wesley Publishing Company.
1966: 1-708.
[45] Ford KW. Basic Physics. 1st Edition. London. Blaisdell Publishing Company. 1968: 3-941.
[46] Beiser A. The Foundations of Physics. 1st Edition. Palo Alto, US. Addison-Wesley Publishing
Company. 1964: 1-594.