Nageswara Rao Puli, Nagul Shaik, M. Kishore Kumar / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 4, July-August 2012, pp. 1857-1860

A New Generic Architecture For Time Series Prediction

*Nageswara Rao Puli, **Nagul Shaik, ***M. Kishore Kumar
*Dept. of Computer Science & Engg., Nimra Institute of Science and Technology, Vijayawada, India
**Asst. Professor, Dept. of Computer Science & Engg., Nimra Institute of Science and Technology, Vijayawada, India
***Professor & HOD, Dept. of Computer Science & Engg., Nimra Institute of Science and Technology, Vijayawada, India

Abstract

Rapidly evolving businesses generate massive amounts of time-stamped data sequences and create a demand for both univariate and multivariate time series forecasting. For such data, traditional predictive models based on autoregression are often not sufficient to capture the complex non-linear relationships between multidimensional features and the time series outputs. To exploit these relationships for improved time series forecasting, while also dealing better with a wider variety of prediction scenarios, a forecasting system requires a flexible and generic architecture that can accommodate and tune various individual predictors as well as combination methods.

In reply to this challenge, an architecture for combined, multilevel time series prediction is proposed, which is suitable for many different universal regressors and combination methods. The key strength of this architecture is its ability to build a diversified ensemble of individual predictors that form the input to a multilevel selection and fusion process before the final optimised output is obtained. Excellent generalisation ability is achieved due to the highly boosted complementarity of individual models, further enforced through crossvalidation-linked training on exclusive data subsets and ensemble output post-processing. In a sample configuration with basic neural network predictors and a mean combiner, the proposed system has been evaluated in different scenarios and showed a clear prediction performance gain.

Index Terms: Time series forecasting, combining predictors, regression, ensembles, neural networks, diversity

Introduction

Time series forecasting, or time series prediction, takes an existing series of data \(x_{t-n}, \ldots, x_{t-2}, x_{t-1}, x_t\) and forecasts the future values \(x_{t+1}, x_{t+2}, \ldots\). The goal is to observe or model the existing data series so that future, unknown data values can be forecasted accurately. Examples of data series include financial data series (stocks, indices, rates, etc.), physically observed data series (sunspots, weather, etc.), and mathematical data series (the Fibonacci sequence, integrals of differential equations, etc.). The phrase "time series" generically refers to any data series, whether or not the data are dependent on a certain time increment.

Throughout the literature, many techniques have been implemented to perform time series forecasting. This paper focuses on two of them: neural networks and k-nearest-neighbor. It attempts to fill a gap in the abundant neural network time series forecasting literature, where testing arbitrary neural networks on arbitrarily complex data series is common but not very enlightening. This paper thoroughly analyzes the responses of specific neural network configurations to artificial data series, where each data series has a specific characteristic, to gain a better understanding of what causes the basic neural network to become an inadequate forecasting technique. In addition, the influence of data preprocessing is noted. The forecasting performance of k-nearest-neighbor, a much simpler forecasting technique, is compared to that of the neural networks. Finally, both techniques are used to forecast a real data series.

Difficulties

Several difficulties can arise when performing time series forecasting; depending on the type of data series, a particular difficulty may or may not exist. A first difficulty is a limited quantity of data. With data series that are observed, limited data may be the foremost difficulty. For example, given a company's stock that has been publicly traded for one year, a very limited amount of data is available for use by the forecasting technique.
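As a minimal illustration of the forecasting task described in the Introduction, the sketch below frames a series \(x_{t-n}, \ldots, x_t\) as (input window, next value) pairs, the supervised form in which a forecasting technique consumes a series. The helper name and window length are illustrative assumptions, not taken from the paper.

```python
# Hypothetical helper: turn a data series into (input window -> next value)
# pairs, the supervised form a forecasting technique operates on.

def make_training_pairs(series, window=4):
    """Slide a window over the series; each window predicts the next value."""
    pairs = []
    for t in range(len(series) - window):
        inputs = series[t:t + window]   # x[t], ..., x[t + window - 1]
        target = series[t + window]     # the value to be forecasted
        pairs.append((inputs, target))
    return pairs

# Example with the Fibonacci sequence mentioned above.
fib = [1, 1, 2, 3, 5, 8, 13, 21]
for inputs, target in make_training_pairs(fib, window=3):
    print(inputs, "->", target)         # e.g. [1, 1, 2] -> 3
```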
A second difficulty is noise. Two types of noisy data are (1) erroneous data points and (2) components that obscure the underlying form of the data series. Two examples of erroneous data are measurement errors and a change in measurement methods or metrics; this paper is not concerned with erroneous data points. An example of a component that obscures the underlying form of a data series is an additive high-frequency component. The technique used in this paper to reduce or remove this type of noise is the moving average. The data series \(\ldots, x_{t-4}, x_{t-3}, x_{t-2}, x_{t-1}, x_t\) becomes

\[\ldots,\ \frac{x_{t-4} + x_{t-3} + x_{t-2}}{3},\ \frac{x_{t-3} + x_{t-2} + x_{t-1}}{3},\ \frac{x_{t-2} + x_{t-1} + x_t}{3}\]

after taking a moving average with an interval \(i\) of three. Taking a moving average reduces the number of data points in the series by \(i - 1\).

A third difficulty is nonstationarity: data that do not have the same statistical properties (e.g., mean and variance) at each point in time. A simple example of a nonstationary series is the Fibonacci sequence, which takes on a new, higher mean value at every step. The technique used in this paper to make a series stationary in the mean is first-differencing. The data series \(\ldots, x_{t-3}, x_{t-2}, x_{t-1}, x_t\) becomes

\[\ldots,\ (x_{t-2} - x_{t-3}),\ (x_{t-1} - x_{t-2}),\ (x_t - x_{t-1})\]

after taking the first-difference. This usually makes a data series stationary in the mean; if not, the second-difference of the series can be taken. Taking the first-difference reduces the number of data points in the series by one. (Both preprocessing steps are illustrated in the code sketch at the end of this section.)

A fourth difficulty is forecasting technique selection. From statistics to artificial intelligence, there are myriad choices of techniques. One of the simplest is to search a data series for similar past events and use the matches to make a forecast; one of the most complex is to train a model on the series and use the model to make a forecast. K-nearest-neighbor and neural networks are examples of the first and second approaches, respectively.

1) Importance

Time series forecasting has several important applications. One application is preventing undesirable events by forecasting the event, identifying the circumstances preceding it, and taking corrective action so that it can be avoided. At the time of this writing, the Federal Reserve Committee is actively raising interest rates to head off a possible inflationary economic period. The Committee possibly uses time series forecasting with many data series to forecast the inflationary period and then acts to alter the future values of those data series.

Another application is forecasting undesirable, yet unavoidable, events to preemptively lessen their impact. At the time of this writing, the sun's cycle of storms, called the solar maximum, is of concern because the storms cause technological disruptions on Earth. The sunspots data series, which counts the dark patches on the sun and is related to the solar storms, shows an eleven-year cycle of solar-maximum activity and, if accurately modeled, can forecast the severity of future activity. While solar activity is unavoidable, its impact can be lessened with appropriate forecasting and proactive action.

Finally, many people, primarily in the financial markets, would like to profit from time series forecasting. Whether this is viable is most likely a never-to-be-resolved question; nevertheless, many products are available for financial forecasting.

A time series is a sequence of observations of a random variable, and hence a stochastic process. Examples include the monthly demand for a product, the annual freshman enrollment in a department of a university, and the daily volume of flows in a river. Forecasting time series data is an important component of operations research because these data often provide the foundation for decision models: an inventory model requires estimates of future demands, a course scheduling and staffing model for a university requires estimates of future student inflow, and a model for providing warnings to the population in a river basin requires estimates of river flows for the immediate future. Time series analysis provides tools for selecting a model that can be used to forecast future events. Modeling the time series is a statistical problem: forecasts are used in computational procedures to estimate the parameters of a model being used to allocate limited resources or to describe random processes such as those mentioned above. Time series models assume that observations vary according to some probability distribution about an underlying function of time.

The remainder of this paper details the neural network and k-nearest-neighbor techniques, discusses related work, gives an application-level description of the test-bed application, and presents an empirical evaluation of the results obtained with it.
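The code sketch referenced above illustrates the two preprocessing steps from the Difficulties section: a moving average with interval \(i\), which shortens the series by \(i - 1\) points, and first-differencing, which shortens it by one point. This is a minimal sketch assuming plain Python lists; the paper does not specify an implementation, and the function names are illustrative.

```python
# Moving average and first-differencing as defined in the Difficulties
# section; function names are illustrative, not from the paper.

def moving_average(series, i=3):
    """Replace each point with the mean of the last i points (length shrinks by i - 1)."""
    return [sum(series[t - i + 1:t + 1]) / i for t in range(i - 1, len(series))]

def first_difference(series):
    """x[t] - x[t-1]; usually makes a series stationary in the mean."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

raw = [2.0, 4.0, 3.0, 5.0, 7.0, 6.0]
print(moving_average(raw, i=3))  # [3.0, 4.0, 5.0, 6.0]
print(first_difference(raw))     # [2.0, -1.0, 2.0, 2.0, -1.0]
```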
Neural Networks

In this section and the next, the subscripts c, p, and n identify units in the current layer, the previous layer, and the next layer, respectively. When the network is run, each hidden layer unit performs the calculation in Equation II.1 on its inputs and transfers the result \(O_c\) to the next layer of units.

Equation II.1 — Activation function of a hidden layer unit:

\[O_c = h_{Hidden}\!\left(\sum_{p=1}^{P} i_{c,p}\, w_{c,p} + b_c\right), \quad \text{where } h_{Hidden}(x) = \frac{1}{1 + e^{-x}}\]

Here \(O_c\) is the output of the current hidden layer unit c; P is either the number of units in the previous hidden layer or the number of network inputs; \(i_{c,p}\) is an input to unit c from either the previous hidden layer unit p or network input p; \(w_{c,p}\) is the weight modifying the connection from unit p (or input p) to unit c; and \(b_c\) is the bias. The function \(h_{Hidden}(x)\) is the sigmoid activation function of the unit. Other types of activation functions exist, but the sigmoid was implemented for this research. To avoid saturating the activation function, which makes training the network difficult, the training data must be scaled appropriately. Similarly, before training, the weights and biases are initialized to appropriately scaled values.

Fig: Forwarding the node values
Fig: Prediction graph for forwarding node

Each output layer unit performs the calculation in Equation II.2 on its inputs and transfers the result \(O_c\) to a network output.

Equation II.2 — Activation function of an output layer unit:

\[O_c = h_{Output}\!\left(\sum_{p=1}^{P} i_{c,p}\, w_{c,p} + b_c\right), \quad \text{where } h_{Output}(x) = x\]

Here \(O_c\) is the output of the current output layer unit c; P is the number of units in the previous hidden layer; \(i_{c,p}\) is an input to unit c from the previous hidden layer unit p; \(w_{c,p}\) is the weight modifying the connection from unit p to unit c; and \(b_c\) is the bias. For this research, \(h_{Output}(x)\) is a linear activation function. (Both unit computations are illustrated in a code sketch at the end of this section.)

K-Nearest-Neighbor

In contrast to the complexity of the neural network forecasting technique, the simpler k-nearest-neighbor forecasting technique is also implemented and tested. K-nearest-neighbor is simpler because there is no model to train on the data series; instead, the data series is searched for situations similar to the current one each time a forecast needs to be made.

To make the description of the k-nearest-neighbor process easier, several terms are defined. The final data points of the data series are the reference, and the length of the reference is the window size. The data series without its last data point is the shortened data series. To forecast the data series' next data point, the reference is compared to the first group of data points in the shortened data series, called a candidate, and an error is computed. Then the reference is moved one data point forward to the next candidate and another error is computed, and so on. All errors are stored and sorted; the smallest k errors correspond to the k candidates that most closely match the reference. Finally, the forecast is the average of the k data points that follow these candidates. To forecast the next data point after that, the process is repeated with the previously forecasted data point appended to the end of the data series.

Fig: Nearest neighbor transformation
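The following sketch implements the unit computations of Equations II.1 and II.2: a weighted sum of the inputs plus a bias, passed through the sigmoid for hidden units and through the identity for output units. Variable names mirror the paper's symbols; the surrounding network construction and backpropagation training are not shown, and the sample values are invented.

```python
import math

def sigmoid(x):
    """h_Hidden(x) = 1 / (1 + e^-x), as in Equation II.1."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(inputs, weights, bias, hidden=True):
    """O_c = h(sum_p i_{c,p} * w_{c,p} + b_c); linear h on output units (Eq. II.2)."""
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(s) if hidden else s

# Invented sample values: one hidden unit feeding one linear output unit.
h = unit_output([0.5, -1.2], [0.8, 0.3], bias=0.1, hidden=True)
o = unit_output([h], [1.5], bias=-0.2, hidden=False)
print(h, o)
```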
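Below is a sketch of the k-nearest-neighbor forecast procedure just described: the reference (the last window-size points) is compared against every candidate window in the shortened data series, the k closest candidates are kept, and the data points that follow them are averaged. The squared-error distance is an assumption; the paper does not state which error measure it computes.

```python
def knn_forecast(series, window=4, k=3):
    """One k-nearest-neighbor forecast, following the description above."""
    reference = series[-window:]        # the reference
    shortened = series[:-1]             # the shortened data series
    errors = []
    for start in range(len(shortened) - window + 1):
        candidate = shortened[start:start + window]
        # Assumed distance: sum of squared differences to the reference.
        err = sum((r - c) ** 2 for r, c in zip(reference, candidate))
        errors.append((err, start))
    errors.sort()                       # smallest errors first
    nearest = errors[:k]
    # Forecast = average of the data point following each close candidate.
    return sum(series[start + window] for _, start in nearest) / k

# Multi-step forecasting: append each forecast and repeat, as in the paper.
data = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0, 1.0, 2.0]
for _ in range(3):
    data.append(knn_forecast(data, window=3, k=2))
print(data[-3:])
```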
II. Conclusion

This paper introduced time series forecasting, described the work presented in the typical neural network paper (which justified this paper), and identified several difficulties associated with time series forecasting. Among these difficulties, noisy and nonstationary data were investigated further. Feed-forward neural networks with backpropagation training were presented and used as the primary time series forecasting technique, and k-nearest-neighbor was presented as an alternative forecasting technique.

Previous time series forecasting papers were briefly discussed, the most notable being the paper by Drossu and Obradovic (1996), who presented compelling research combining stochastic techniques and neural networks. Also of interest were the paper by Geva (1998) and the book by Kingdon (1997), which took significantly more sophisticated approaches to time series forecasting.

The test-bed application, Forecaster, was presented along with several important aspects of its design, including parsing data files, using the Wizard to create networks, training networks, and forecasting using neural networks and k-nearest-neighbor.

Finally, the empirical evaluation formed the crux of the paper. The data series used in the evaluation were described, and then the parameters and procedures used in forecasting were given. Among these were a method for selecting the number of neural network inputs based on data series characteristics (also applicable to selecting the window size for k-nearest-neighbor), a training heuristic, and a metric for making quantitative forecast comparisons. A variety of charts and tables, accompanied by many empirical observations, were presented for networks trained heuristically and simply, and for k-nearest-neighbor.

References

[1] Drossu, R., & Obradovic, Z. (1996). Rapid Design of Neural Networks for Time Series Prediction. IEEE Computational Science & Engineering, Summer 1996, 78-89.
[2] Geva, A. (1998). ScaleNet—Multiscale Neural-Network Architecture for Time Series Prediction. IEEE Transactions on Neural Networks, 9(5), 1471-1482.
[3] Gonzalez, R. C., & Woods, R. E. (1993). Digital Image Processing. New York: Addison-Wesley.
[4] Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. New York: Wiley & Sons.
[5] Kingdon, J. (1997). Intelligent Systems and Financial Forecasting. New York: Springer-Verlag.
[6] Lawrence, S., Tsoi, A. C., & Giles, C. L. (1996). Noisy Time Series Prediction Using Symbolic Representation and Recurrent Neural Network Grammatical Inference [Online]. Available: http://www.neci.nj.nec.com/homepages/lawrence/papers/finance-tr96/latex.html [March 27, 2000].
[7] McCulloch, W. S., & Pitts, W. H. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, 115-133.
[8] Minsky, M., & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press.
[9] Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, D.C.: Spartan.
[10] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning Internal Representations by Error Propagation. In D. E. Rumelhart et al. (Eds.), Parallel Distributed Processing: Explorations in the Microstructures of Cognition, 1: Foundations, 318-362. Cambridge, MA: MIT Press.
[11] Torrence, C., & Compo, G. P. (1998). A Practical Guide to Wavelet Analysis [Online]. Bulletin of the American Meteorological Society. Available: http://paos.colorado.edu/research/wavelets/ [July 2, 2000].
[12] Zhang, X., & Thearling, K. (1994). Non-Linear Time-Series Prediction by Systematic Data Exploration on a Massively Parallel Computer [Online]. Available: http://www3.shore.net/~kht/text/sfitr/sfitr.htm [March 27, 2000].