
International Journal of Electronics and Communication Engineering & Technology (IJECET), ISSN 0976-6464 (Print), ISSN 0976-6472 (Online), Volume 4, Issue 5, September-October 2013, pp. 117-125, © IAEME: www.iaeme.com/ijecet.asp. Journal Impact Factor (2013): 5.8896 (calculated by GISI), www.jifactor.com

FORECASTING STOCK MARKET MOVEMENT: A NEURAL NETWORK APPROACH

Ms. Ashwini Pravin Navghane (1), Dr. Pradeep Mitharam Patil (2)
1 Assistant Professor, Department of Electronics & Telecommunication, Vishwakarma Institute of Information Technology, Kondhwa, Pune, India
2 Dean, RMD Sinhgad School of Engineering, Warje, Pune, India

ABSTRACT

Researchers have recently applied several techniques to forecast stock market movement, motivated by the prospect of financial gain. Traditionally, stock investors have relied on technical analysis, which forecasts stock prices from historical prices and volume using basic concepts of trends, price patterns and oscillators, to aid investment decisions. In this paper a computational approach, the neural network, is used to predict stock market prices. Training is done with the back-propagation algorithm on a dataset of five Indian companies covering the trading days from August 2004 to January 2013. The approach is implemented in MATLAB, and the error rate is computed to assess prediction accuracy.

Keywords: Artificial neural network, Feed forward NN, Back propagation algorithm.

I. INTRODUCTION

A majority of research studies have aimed specifically at predicting the price levels of stock market indices. Over the last few decades, however, there has been growing interest in applying artificial neural networks to the prediction of stock market prices.
The feed-forward network is the most commonly used NN approach for prediction. Artificial neural networks, inspired by the activity of human brain cells, can learn patterns in data and generalize that knowledge to recognize new patterns in the future. Nowadays neural networks are a common data mining method in fields such as science, industry, economy and business [1]. The idea of using neural networks for prediction problems was first modelled by Hu in 1964, for weather forecasting [2]. Werbos used the technique to train a neural network in 1988 and claimed that neural networks outperform regression methods and the Box-Jenkins model in prediction problems [3]. Research on neural network applications continued to the point that all the winners of the prediction contest at the Santa Fe Institute had used neural networks [4]. With the increasing accuracy and precision of analytical measuring methods, it is clear that not all effects of interest can be described by simple univariate, or even by linear multivariate, correlations. In recent research, chemists have turned to a set of methods known as artificial neural networks [5]. ANNs are difficult to capture in a simple definition; perhaps the closest description is a comparison with a black box having multiple inputs and multiple outputs, which operates using a large number of mostly parallel-connected simple arithmetic units. ANN tasks can be classified into the following categories:

Approximation: to determine the weights that minimize the (least-square or absolute) error distance between the produced output and the target output [6]. This is broadly equivalent to regression analysis in statistics, where an analytical procedure solves the normal equations to find the regression coefficients.

Optimization: to determine the optimal solution to NP-complete problems, such as the travelling salesperson problem [7].

Classification: to classify an object, characterized by its input vector, into one of several categories or groups. The input vector may have continuous or discrete values.

Prediction: to predict output values from input values. While the input values may be continuous or discrete, the output values are continuous; this distinguishes prediction from classification and makes it equivalent to making predictions and forecasts in multivariate statistics.

II. RELATED WORKS

Sagar et al.
[8] have proposed a method in which a neural network is treated as an adaptive system that progressively self-organizes to estimate the solution, freeing the problem solver from the need to specify the steps toward the solution precisely and in advance. Furthermore, evolutionary computation can be integrated with an artificial neural network to amplify performance at a variety of levels; the resulting network is called an evolutionary ANN. In that paper, they addressed one issue of neural networks, namely the adjustment of connection weights, with learning performed by a genetic algorithm over a feed-forward architecture. The performance of the developed solution was compared with the well-established gradient descent method of learning, and a benchmark classification problem, XOR, was taken to justify the experiment. The presented method not only had a high probability of attaining the global minimum but also converged very fast. Ahmad M. Sarhan [9] developed a stomach cancer detection system based on an ANN and the Discrete Cosine Transform (DCT). The proposed system extracts classification features from stomach microarrays using the DCT, and the ANN performs the classification (tumor or no tumor) on the features extracted from the DCT coefficients. The study used microarray images obtained from the Stanford Medical Database (SMD), and simulation results confirmed the ability of the proposed system to produce a very high success rate. Yoda [10] investigated the predictive capacity of neural network models for the Tokyo Stock Exchange. The study of stock prediction can be broadly divided into two schools of thought: one focuses on computer experiments in virtual/artificial markets, the other on stock prediction based on real-life financial data. White [11] published the first significant study on the application of neural network models to stock market forecasting.
III. PROPOSED METHODOLOGY

For many years it has been a common goal to make life easier, and everybody looks for the simplest way to earn money; financial gain is therefore the foremost motivation for forecasting stock market prices. Forecasting stock market prices has recently been gaining attention, perhaps because investors can be better guided if the direction of the market is successfully predicted. The profitability of investing and trading in the stock market depends to a large extent on predictability. Stock market price prediction using the neural network approach is discussed in the following sections.

III.1 Prediction using FFNN

The simplest model of an ANN is the feed-forward neural network. A Feed-Forward Neural Network (FFNN) consists of at least three layers of neurons: an input layer, at least one intermediate hidden layer, and an output layer. Typically, neurons are connected in a feed-forward fashion, with input units fully connected to neurons in the hidden layer and hidden neurons fully connected to neurons in the output layer. Some parameters have to be set during the training period, such as the error, threshold and tolerance; the accuracy of the results depends on these parameters. To reach the target rapidly, the error is pre-processed. A feed-forward network maps a set of input values to a set of output values and can be regarded as the graphical representation of a parametric function. The neural network used to train the dataset is shown in Figure 1.

Figure 1. Typical feed forward neural network composed of three layers

The steps involved in the feed-forward neural network are discussed below:

Step 1: Initialize the dataset: inputs, outputs and the weights of the neurons.

Step 2: Define the error term from the relation between Y_K, the actual output for pattern K, and O_K, the required output for pattern K:

Error = (1/2) Σ_K (Y_K − O_K)²   (1)
Step 3: Find the weight deviation in the hidden layer:

δW = Error × γ × α   (2)

where γ is the learning rate and α is the average of the hidden layer.

Step 4: Find the new weight of the connection between hidden neuron H_τ and output neuron Y_ρ:

W(H_τ, Y_ρ)_new = W(H_τ, Y_ρ) + δW   (3)

where τ = 1, 2, 3, ..., N and ρ = 1, 2, 3, ..., K.

Step 5: Terminate the process when the error is minimized.

III.2 Training using BP algorithm

The algorithm returns future values as outputs from the trained neural network, given past values as inputs. Here the back-propagation algorithm (BPA) is used for future prediction of stock market rates. The back-propagation algorithm uses the steepest-descent minimization method, and it requires that the activation function used by the artificial neurons be differentiable. The steps involved in the back-propagation algorithm are discussed below:

Step 1: Assign random weights in the range [0, 1] to the links of the artificial neural network above.

Step 2: During training, compute the output from two functions: the basis function (the weighted sum of the inputs) and the non-linear activation function. Eq. (4) and Eq. (5) show the basis function and the activation function, where w_ij is the weight of the neuron, α is the bias and X ranges over [0, 1]:

Y_i = α + Σ_{j=0}^{N_g − 1} w_ij X_j,  0 ≤ i ≤ N_s − 1   (4)

H = 1 / (1 + exp(−X))   (5)

Step 3: To determine the BP error E, the training data set is given to the NN and the network outputs are compared with the targets as in Eq. (1).

Step 4: Once the BP error is determined, the weights of all the neurons are adjusted as follows:

w_ij = w_ij + Δw_ij   (6)

The change in weight Δw_ij can be determined as Δw_ij = γ · y_ij · E, where E is the BP error and γ is the learning rate, which normally ranges from 0.2 to 0.5.
Step 5: After the weights are adjusted, steps 2 to 4 are repeated until the BP error is minimized; normally the iteration stops once the criterion E < 0.1 is satisfied. The stock market prediction is thus obtained from the neural network trained with the BP algorithm.
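The training procedure in Sections III.1 and III.2 can be sketched as a small NumPy program. This is a minimal illustration, not the authors' MATLAB implementation: the 3-20-1 network shape anticipates the configuration of Section IV, the data is synthetic, bias terms are omitted for brevity, and the weight updates use the full gradient of the error in Eq. (1) rather than the abbreviated update Δw_ij = γ · y_ij · E.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for the sketch: 3 inputs per pattern (stand-ins for
# opening price, high, low), 1 target (stand-in for closing price).
X = rng.random((50, 3))
T = X.mean(axis=1, keepdims=True)

# Step 1: random weights in [0, 1] for a 3-20-1 network.
W1 = rng.random((3, 20))
W2 = rng.random((20, 1))
gamma = 0.3  # learning rate, within the 0.2-0.5 range given in Step 4

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # activation function, Eq. (5)

errors = []
for epoch in range(5000):
    # Step 2: forward pass -- basis function (weighted sum), then activation.
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    # Step 3: BP error as in Eq. (1): E = (1/2) * sum((Y_K - O_K)^2).
    E = 0.5 * np.sum((T - O) ** 2)
    errors.append(E)
    if E < 0.1:  # Step 5: stopping criterion
        break
    # Step 4: adjust the weights (full-gradient version of w_ij += delta_w_ij).
    d_out = (O - T) * O * (1 - O)          # delta at the output layer
    d_hid = (d_out @ W2.T) * H * (1 - H)   # delta back-propagated to the hidden layer
    W2 -= gamma * (H.T @ d_out)
    W1 -= gamma * (X.T @ d_hid)

print(f"epochs run: {len(errors)}, final error: {errors[-1]:.4f}")
```

Batch gradient descent on the sigmoid network drives the summed squared error down from its initial value; in practice the stopping threshold, learning rate and epoch budget would all be tuned to the real price data.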
IV. IMPLEMENTATION AND RESULTS

Forecasting a share market price index can be considered an example of time series forecasting, which analyses past data and projects estimates of future values. Essentially, the method models a nonlinear function by a recurrence relation derived from past values; the recurrence relation can then be used to predict new values in the time series, which will ideally be good approximations of the actual values. The prediction is carried out in MATLAB using a feed-forward NN. For the implementation, 10 years of stock price data for five Indian companies (Infosys, Wipro, Bharthiartl, Tech Mahindra and TCS) are collected on a monthly basis with four indices: opening price, closing price, high and low. The NN is trained with 3 inputs (opening price, high, low), 1 output (closing price) and 20 hidden neurons. Using the back-propagation algorithm, the future prices of the aforesaid companies are predicted, and the graphical results are shown below. The 10 years of company data are plotted as the 'actual result' and the forecasted price data as the 'predict result'; the error deviation for each company is also computed and plotted.

Figure 2. Predicted closing price of Bharthiartl using BPA

Figure 3. Error deviation of Bharthiartl using BPA
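The data layout described above can be sketched as follows. The price figures and the min-max scaling are illustrative assumptions (the paper does not state its preprocessing); the sketch only shows how the four monthly indices split into the 3-input/1-output arrangement used to train the network.

```python
import numpy as np

# Hypothetical monthly records: [open, high, low, close] per row.
prices = np.array([
    [410.0, 432.5, 401.2, 428.0],
    [428.0, 455.0, 420.1, 450.3],
    [450.3, 470.8, 441.0, 446.9],
    [446.9, 452.2, 430.5, 439.1],
])

X = prices[:, :3]   # inputs: opening price, high, low
y = prices[:, 3:]   # target: closing price

# Min-max scale to [0, 1] so the sigmoid units operate away from saturation.
lo, hi = prices.min(), prices.max()
X_scaled = (X - lo) / (hi - lo)
y_scaled = (y - lo) / (hi - lo)

print(X_scaled.shape, y_scaled.shape)  # (4, 3) (4, 1)
```

Predictions produced in the scaled space would be mapped back to rupee prices by inverting the same transformation.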
Figure 4. Predicted closing price of Infosys using BPA

Figure 5. Error deviation of Infosys using BPA

Figure 6. Predicted closing price of Tech Mahindra using BPA
Figure 7. Error deviation of Tech Mahindra using BPA

Figure 8. Predicted closing price of TCS using BPA

Figure 9. Error deviation of TCS using BPA
Figure 10. Predicted closing price of Wipro using BPA

Figure 11. Error deviation of Wipro using BPA

V. CONCLUSION

In this research the stock prices of five Indian companies are forecasted using an artificial neural network and 10 years of data from 2004 to 2013. A feed-forward NN is trained on the dataset and the back-propagation algorithm is used for prediction. The graphical representations in Section IV show that the prediction results are encouraging, since the error rates are moderate. It is therefore concluded that the neural network approach forecasts stock market prices accurately.

REFERENCES

[1] G. Grudnitzky and L. Osburn, "Forecasting S&P and Gold Futures Prices: An Application of Neural Networks," Journal of Futures Markets, Vol. 13, No. 6, pp. 631-643, September 1993.
[2] M.J.C. Hu, "Application of the Adaline System to Weather Forecasting," Master Thesis, Technical Report 6775-1, Stanford Electronic Laboratories, Stanford, CA, June 1964.
[3] P.J. Werbos, "Generalization of back propagation with application to a recurrent gas market model," Neural Networks, Vol. 1, pp. 339-356, 1988.
[4] A.S. Weigend and N.A. Gershenfeld, "Time Series Prediction: Forecasting the Future and Understanding the Past," Addison Wesley, Reading, MA, 1993.
[5] J. Gasteiger and J. Zupan, "Neural Networks in Chemistry," Angew. Chem. Int. Ed. Engl., Vol. 32, pp. 503-527, 1993.
[6] S.Y. Kung, K. Diamantaras, W.D. Mao and J.S. Taur, "Generalized Perceptron Networks with Nonlinear Discriminant Functions," in R.J. Mammone and Y. Zeevi (Eds.), Neural Networks: Theory and Applications, Boston: Academic Press, Inc., 1991, pp. 245-279.
[7] J.J. Hopfield and D. Tank, "Neural Computation of Decisions in Optimization Problems," Biological Cybernetics, Vol. 52, pp. 147-152, 1985.
[8] Sagar, Venkata Chalam and Manoj Kumar Singh, "Evolutionary Algorithm for Optimal Connection Weights in Artificial Neural Networks," International Journal of Engineering (IJE), Vol. 5, No. 5, 2011.
[9] Ahmad M. Sarhan, "Cancer Classification Based on Microarray Gene Expression Data using DCT and ANN," Journal of Theoretical and Applied Information Technology, Vol. 6, No. 2, pp. 207-216, 2009.
[10] M. Yoda, "Predicting the Tokyo Stock Market," in G.J. Deboeck (Ed.), Trading on The Edge, John Wiley & Sons Inc., 1994, pp. 66-79.
[11] H. White, "Economic prediction using Neural Networks: The case of IBM daily stock returns," in Proc. of the IEEE International Conference on Neural Networks, 1988, pp. 451-458.
[12] D.A. Kapgate and Dr. S.W. Mohod, "Short Term Load Forecasting using Hybrid Neuro-Wavelet Model," International Journal of Electronics and Communication Engineering & Technology (IJECET), Vol. 4, Issue 2, pp. 280-289, 2013, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
[13] K.V. Sujatha and S. Meenakshi Sundaram, "Regression, Theil's and MLP Forecasting Models of Stock Index," International Journal of Computer Engineering & Technology (IJCET), Vol. 1, Issue 1, pp. 82-91, 2010, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
AUTHORS' INFORMATION

Ms. Ashwini Pravin Navghane obtained her Bachelor's degree in Electronics & Telecommunication from the Institution of Electronics & Telecommunication Engineers, University of Delhi, and her Master's degree in Electronics & Telecommunication from Pune Institute of Computer Technology (PICT), University of Pune, India. She is currently an Assistant Professor in the Department of Electronics & Telecommunication at Vishwakarma Institute of Information Technology, Kondhwa, Pune, India. Her specializations include microwave engineering, image processing and neural networks; her current research interests are artificial neural networks and pattern recognition.

Dr. Pradeep Mitharam Patil received his B.E. (Electronics) degree in 1988 from Amravati University, Amravati (India) and his M.E. (Electronics) degree in 1992 from Marathwada University, Aurangabad (India). He received his Ph.D. degree in Electronics and Computer Engineering in 2004 from Swami Ramanand Teerth Marathwada University (India). From 1988 to 2011 he worked as Lecturer, Assistant Professor and Professor in departments of Electronics Engineering at various engineering colleges in Pune University (India). Presently he is Dean of RMD Sinhgad School of Engineering and Director of the RMD Sinhgad Technical Institutes Campus, Warje, Pune (India). He is a member of professional bodies such as IE, ISTE and IEEE, and a Fellow of IETE. He has been recognized as a Ph.D. guide by various universities in the state of Maharashtra (India). His research areas include pattern recognition, neural networks, fuzzy neural networks and power electronics. His work has been published in various international and national journals and conferences, including IEEE and Elsevier.