
Stock Ranking - A Neural Networks Approach


  1. STOCK RANKING: A NEURAL NETWORKS APPROACH. Rituraj, B.Tech. (EEE), NIT Calicut. Copyright © 2008 Rituraj. All Rights Reserved.
  2. Topics to be Discussed
     - What is 'Stock Ranking'?
     - What is a 'Neural Network'?
     - Why use Neural Networks in Stock Ranking
     - Methodology
     - Variable Selection
     - Data Collection
     - Data Pre-processing
     - Neural Network Selection
     - Training, Testing and Validation
     - Evaluation Criteria
     - Result and Conclusion
     - Future Scope
     - References
  3. What is 'Stock Ranking'?
     - It is the task of assigning ratings[1] to different stocks (shares, bonds, etc.) based on their forecasts, so as to facilitate the construction of a portfolio[2].
     - Mainly, it entails comparing and predicting the performance of stocks over a period of time.
     - [1] Ratings as given by CRISIL and Moody's.
     - [2] A portfolio is a collection of investments.
  4. What is a 'Neural Network'?
     - "A massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use." - Simon Haykin
     - (Figure: a biological neuron)
  5. Comparison between the Brain and a Digital Computer
     - Interestingly, even with a much slower processing speed, the brain outperforms the digital computer.
  6. Neuron Model
     - The artificial neuron is the basic element of a neural network.
     - It processes information and stores it for future use.
     - It is modelled after the biological neuron, having interconnections (weights), an adder, and a non-linear activation function.
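The weights-adder-activation structure above can be sketched in a few lines. This is a language-neutral illustration in Python (the deck itself uses MATLAB); the weight and input values are purely illustrative.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: an adder (weighted sum plus bias)
    followed by a non-linear activation (tanh, MATLAB's 'tansig')."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)

# Example with two inputs and illustrative weights
y = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
```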
  7. Neural Network Architectures (figure)
  8. Neural Network Learning Algorithms
     - The choice depends on the kind of network used and the problem at hand.
     - Supervised learning: LMS algorithm, back-propagation algorithm, delta learning.
     - Unsupervised learning: Hebbian learning, competitive learning.
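As a concrete instance of supervised learning, a single LMS (Widrow-Hoff) weight update can be sketched as below. This is an illustrative Python sketch, not code from the deck; the learning rate and data are placeholders.

```python
def lms_update(weights, x, target, lr):
    """One LMS (Widrow-Hoff) step: nudge each weight in proportion to
    the error between the target and the current linear output."""
    y = sum(w * xi for w, xi in zip(weights, x))   # linear output
    err = target - y                               # prediction error
    return [w + lr * err * xi for w, xi in zip(weights, x)]

# One update from zero weights toward a target of 1.0
new_w = lms_update([0.0, 0.0], [1.0, 1.0], target=1.0, lr=0.5)
```

Repeating this step over the training set drives the mean square error down, which is the quantity the later slides use as the training goal.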
  9. Why use Neural Networks in Stock Ranking
     - They are universal function approximators that can map any non-linear function.
     - They are powerful methods for pattern recognition, classification, and forecasting.
     - They are less sensitive to error-term assumptions.
     - They can tolerate noise and chaos.
  10. Why use Neural Networks in Stock Ranking
     - Existing actuarial methods for stock ranking are time-series prediction, regression analysis, and multiple linear regression.
     - Stock ranking is a dynamic, non-linear, non-mathematical problem, driven by individual and collective judgement.
     - Stock prices have unfathomable relationships with a number of parameters, and these relationships change over time.
     - So statistical modelling is inherently incapable of addressing this problem.
  11. Methodology
     - Variable selection
     - Data collection
     - Data pre-processing
     - Neural network selection
       - Number of hidden layers
       - Number of hidden neurons
       - Number of output neurons
       - Transfer functions
     - Training, testing, and validation
     - Evaluation criteria
  12. Variable Selection
     - Output: daily stock prices of a number of companies listed on the Bombay Stock Exchange, preferably in the top 100 by market capitalisation[1].
     - Input variables:
       - Macroeconomic: global crude-oil price[2]
       - Technical: BSE Sensex
       - Inter-market data: Rupee-Dollar exchange rate
     - [1] Denotes the wealth of the company.
     - [2] Stands in for inflation in the economy.
  13. Data Collection
     - Sources: Energy Information Administration, BSE, Rediff Money, Yahoo Business.
     - Daily data was collected for the past 10 years, with the required omissions for non-working days.
  14. Data Pre-processing
     Discrepancies in the procured data:
     - Large ranges in the data, making the neural network complex.
     - Correlation between day-to-day data, introducing redundant information.
     - Unusual variance in the data, so that it contributes very little to the data set.
     - Concentration of data points near the saturation ends of the activation function.
  15. Data Pre-processing
     Using MATLAB functions:
     - prestd: pre-processes the network training set by normalizing the inputs and targets to zero mean and unit standard deviation.
     - prepca: transforms the input data (via principal component analysis) so that the elements of the input vector set are uncorrelated.
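For readers without MATLAB, the effect of prestd and prepca can be sketched with NumPy. This is a rough equivalent under the usual definitions (standardization, then projection onto principal components), not the deck's actual code.

```python
import numpy as np

def standardize(X):
    """Rough equivalent of prestd: zero mean, unit standard deviation
    for each input variable (column)."""
    mean, std = X.mean(axis=0), X.std(axis=0)
    return (X - mean) / std, mean, std

def decorrelate(X):
    """Rough equivalent of prepca: project the centred data onto the
    eigenvectors of its covariance matrix, so the transformed
    variables are mutually uncorrelated."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    return Xc @ eigvecs
```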
  16. Neural Network Selection
     - A multilayer perceptron trained with the back-propagation algorithm, as this is the most widely used network model and algorithm in financial applications.
  17. Neural Network Selection
     - On simulation in MATLAB, the scaled conjugate gradient algorithm converged to the required mean square error (MSE) within the given number of epochs.
     - Syntax:
       net = newff(minmax(p), [3,1], {'tansig','purelin'}, 'trainscg');
     - newff: creates a new feed-forward network net.
     - minmax: finds the range of each row of matrix p, the set of input variables.
  18. Neural Network Selection
     net = newff(minmax(p), [3,1], {'tansig','purelin'}, 'trainscg');
     - [3,1]: the number of neurons in the hidden layer and the output layer respectively.
     - 'tansig', 'purelin': the activation functions used in the hidden and output layers.
     - 'trainscg': the scaled conjugate gradient training algorithm.
     - Training parameters such as the number of epochs and the training goal can be changed as required.
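The forward pass of the [3,1] network that newff constructs can be written out explicitly. The Python sketch below mirrors the layer structure only; the weights are random placeholders, not the values MATLAB would learn.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a 3-hidden-neuron, 1-output feed-forward net:
    tanh ('tansig') in the hidden layer, identity ('purelin') at the
    output, matching the {'tansig','purelin'} choice above."""
    h = np.tanh(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2         # linear output layer

# Illustrative random weights for a 3-input network
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
y = mlp_forward(np.array([0.1, -0.2, 0.3]), W1, b1, W2, b2)
```

Training (trainscg) then adjusts W1, b1, W2, b2 to minimise the MSE between y and the target prices.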
  19. Training, Testing and Validation
     - A set of 99 days' data was pre-processed.
     - The scaled conjugate gradient (SCG) algorithm in MATLAB was used.
     - (Table: performance under varying parameters)
  20. Training, Testing and Validation: training and validation curves of the SCG network, with 5 neurons in the hidden layer, goal 10^-5, learning rate 0.005.
  21. Training, Testing and Validation: training and validation curves of the SCG network, with 15 neurons in the hidden layer, goal 10^-5, learning rate 0.005.
  22. Training, Testing and Validation: training and validation curves of the SCG network, with 15 neurons in the hidden layer, goal 10^-4, learning rate 0.05.
  23. Training, Testing and Validation: training and validation curves of the SCG network, with 15 neurons in the hidden layer, goal 10^-4, learning rate 0.005.
  24. Training, Testing and Validation: training and validation curves of the SCG network, with 15 neurons in the hidden layer, goal 10^-4, learning rate 0.005.
  25. Evaluation Criteria
     - "The main point of reference for evaluating the performance of networks is comparison to current 'best practice', i.e. multiple linear regression." - Refenes AN et al.
     - Here, the R-value[1] was taken as the evaluation criterion for this comparison.
     - [1] Denotes variability in the observations.
  26. Evaluation Criteria
     - The same data set was used for multiple linear regression.
     - The regression statistic R achieved was 0.9357.
     - Training with neural networks gave a better R-value in all the examples.
     - (Figure: scatter plot of achieved value vs. target value using MLR)
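The MLR baseline and its R statistic can be reproduced with a least-squares fit. This Python sketch (not the deck's MATLAB code) computes R as the correlation between fitted and actual values, one common reading of the regression R above.

```python
import numpy as np

def mlr_r_value(X, y):
    """Fit multiple linear regression by least squares and return R,
    the correlation between predicted and actual outputs."""
    A = np.column_stack([X, np.ones(len(X))])    # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
    y_hat = A @ coef
    return np.corrcoef(y_hat, y)[0, 1]
```

On data that is exactly linear in the inputs, R approaches 1; noisier or more non-linear data pulls it down, which is where the network's extra flexibility shows.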
  27. Result and Conclusion
     - The performance of the neural network (in terms of MSE, R-value, and whether the simulation succeeds overall) improves with an increase in the number of neurons in the hidden layer.
     - With too large an increase in hidden neurons, however, the network tends to become over-trained: it memorises the results rather than generalising from them.
     - The goal is met when the performance criterion (the MSE goal) is relaxed, even with fewer neurons in the hidden layer.
     - With an increase in learning rate, the network updates its weights at a greater rate, and a decrease in performance is observed.
  28. Result and Conclusion
     - Comparing the MLR and neural network approaches, the latter can be used for the ranking of stocks.
     - Applied to a universe of stocks, this method can predict future trends and thus help in building a portfolio.
  29. Future Scope
     The following changes can be incorporated to increase the dependability of this method:
     - The data could cover more days, especially since Simon Haykin notes that "the appropriate number of training examples is directly proportional to the number of weights in the network and inversely proportional to the accuracy parameter" (i.e. the performance goal, MSE).
  30. Future Scope
     - The input parameters used here are adequate for this forecasting, but results would improve if more related parameters, such as earnings per share (EPS) and the price-to-earnings (P/E) ratio, were included.
     - More advanced back-propagation algorithms could be employed.
     - Dedicated tools such as the neural network/adaptive system simulator NeuroSolutions could be used for greater insight.
  31. References
     - [1], [2] S. Haykin, "Neural Networks: A Comprehensive Foundation", Chapter 1, IEEE Press, 1994.
     - [3] H. White, "Learning in neural networks: A statistical perspective", Neural Computation 4, pp. 425-464, 1989.
     - [4] T. Masters, "Practical Neural Network Recipes in C++", Academic Press, New York, 1993.
     - [5] C. N. W. Tan, "An Artificial Neural Networks Primer with Financial Applications Examples in Financial Distress Predictions and Foreign Exchange Hybrid Trading System", School of Information Technology, Bond University, Australia.
     - [6], [7] I. Kaastra and M. Boyd, "Designing a neural network for forecasting financial and economic time series", Neurocomputing 10, pp. 215-236, 1996.
     - [8] MATLAB 7.0 Help Files.
     - [9] A. N. Refenes et al., "Stock Ranking: Neural Networks Vs Multiple Linear Regression", Department of Computer Science, University College London, UK.
  32. Thank you.
