Generating forecasts is undoubtedly one of the most
important activities in any industry. It helps not only the demand
planners of the company, who focus primarily on forecasting
future sales, but also inventory management and its
handling costs. Down the line of the company's supply chain,
a wrong forecast can lead to incurring
heavy costs. Beyond this, accurate forecasting
strongly influences the profit and loss of the company,
which in turn depends heavily on the predicting machine
that the company uses for forecasting. Demand planners
look for consistently accurate results over a period of time:
the accuracy of the forecast must not only be good, but
the algorithm must also remain stable over a long period. We
illustrate this through our analysis, and the model accounts for the
stability of the particular algorithm selected for each SKU. In this
paper we present a new ensemble technique based on the
averaging method, which not only gives priority to the algorithm
that consistently maintains good accuracy but also decreases
the deviation from the actual sales. Based on the history, it
assigns greater importance to the algorithm that predicts better and
penalizes the other algorithms that deviate from the actual sales.
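The weighting idea described above can be sketched as a small routine. The inverse-MAE weighting, the function name, and the variable names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def accuracy_weighted_ensemble(history, actuals, new_preds):
    """Combine forecasts, favouring algorithms with low historical error.

    history   : dict mapping algorithm name -> list of its past forecasts
    actuals   : array of the actual sales for those past periods
    new_preds : dict mapping algorithm name -> its forecast for the new period
    """
    actuals = np.asarray(actuals, dtype=float)
    # Mean absolute error of each algorithm over the history window.
    errors = {a: np.mean(np.abs(np.asarray(p, dtype=float) - actuals))
              for a, p in history.items()}
    # Inverse-error weights: accurate algorithms get priority,
    # algorithms that deviated from actual sales are penalized.
    inv = {a: 1.0 / (e + 1e-9) for a, e in errors.items()}
    total = sum(inv.values())
    weights = {a: v / total for a, v in inv.items()}
    combined = sum(weights[a] * new_preds[a] for a in new_preds)
    return combined, weights
```

With this scheme an algorithm that tracked actual sales closely dominates the average, while a consistently deviating one contributes almost nothing.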
TFFN: Two Hidden Layer Feed Forward Network using the randomness of Extreme Learning Machine
1. TFFN: Two Hidden Layer Feed Forward Network using the
randomness of Extreme Learning Machine
Authors:
Nimai Chand Das Adhikari
Arpana Alka
Dr. Raju K George
Indian Institute of Space Science and Technology, Trivandrum
2. Problem Statement
• Use of Back Propagation Algorithm for Feed Forward Neural Networks
• Local Optima
• Trivial Manual Intervention
• Time Consumption
8. TFFN: Theorems behind it
• Universal approximation capability
• Classification Capability Theorem
• Moore Penrose Generalized inverse
• Least Squares Solution
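The last two items, the Moore-Penrose generalized inverse and the least-squares solution, are what replace iterative training in ELM-style networks: hidden weights are assigned randomly and only the output weights are solved for in closed form. A minimal single-hidden-layer sketch follows (the paper's TFFN uses two hidden layers; this simplification and the function names are assumptions):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Fit an ELM-style network: random hidden layer, least-squares output."""
    rng = np.random.default_rng(seed)
    # Hidden weights and biases are drawn randomly and never updated.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Output weights via the Moore-Penrose pseudoinverse:
    # the minimum-norm least-squares solution of H @ beta = y.
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Single forward pass through the random hidden layer."""
    return np.tanh(X @ W + b) @ beta
```

Because `beta` is obtained in one pseudoinverse computation, there are no back-propagation iterations and no gradient-based tuning of the hidden layer.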
9. Conclusions
• The randomness concept avoids the “iterations” required by Back
Propagation
• It has a better “generalization capability” than Back Propagation
• It automatically optimizes the hyperparameters
• The randomness restricts the algorithm to “move in one direction only” (a single forward pass, with no backward error propagation)
• The algorithm cannot assign priority on the basis of the training dataset
10. References
[1] Huang, Gao, et al. “Trends in extreme learning machines: A review.” Neural Networks 61 (2015): 32-48.
[2] Cambria, Erik, et al. “Extreme learning machines [trends & controversies].” IEEE Intelligent Systems 28.6
(2013): 30-59.
[3] Huang, Guang-Bin. “An insight into extreme learning machines: random neurons, random features and
kernels.” Cognitive Computation 6.3 (2014): 376-390.
[4] Huang, Guang-Bin, et al. “Extreme learning machine for regression and multiclass classification.” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42.2 (2012): 513-529.
[5] Huang, Guang-Bin, Qin-Yu Zhu, and Chee-Kheong Siew. “Extreme learning machine: theory and
applications.” Neurocomputing 70.1 (2006): 489-501.
[6] Funahashi, Ken-Ichi. “On the approximate realization of continuous mappings by neural networks.” Neural
Networks 2.3 (1989): 183-192.
[7] Anthony, Martin, and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge
University Press, 2009.
[8] Huang, Guang-Bin, and Lei Chen. “Convex incremental extreme learning machine.”
Neurocomputing 70.16 (2007): 3056-3062.
[9] Albert, Arthur. Regression and the Moore-Penrose Pseudoinverse. Elsevier, 1972.
[10] Penrose, Roger. “A generalized inverse for matrices.” Mathematical Proceedings of the Cambridge
Philosophical Society 51.3 (1955).
[11] Golub, Gene H., and Charles F. Van Loan. “An analysis of the total least squares problem.” SIAM Journal on
Numerical Analysis 17.6 (1980): 883-893.
[12] Lawson, Charles L., and Richard J. Hanson. Solving Least Squares Problems. Society for Industrial and
Applied Mathematics, 1995.