IOSR Journal of Business and Management (IOSR-JBM) is an open access international journal that provides rapid publication (within a month) of articles in all areas of business and management and its applications. The journal welcomes publication of high quality papers on theoretical developments and practical applications in business and management. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publication.
This document discusses using machine learning techniques to forecast financial time series and predict stock market prices. It provides an overview of various machine learning and statistical methods that have been used for stock prediction, including regression, support vector machines, decision trees, neural networks, and random forests. The authors aim to formulate short-term and long-term predictions of stock price direction, changes in price, and actual price. They collect historical stock price and technical indicator data and use feature selection and scaling before applying classification and regression models to achieve 81% accuracy for trend direction and RMSE errors of 0.0117 and 0.0613 for next day price and change, respectively.
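To make the described pipeline concrete, here is a minimal sketch in Python of the general approach (feature scaling, then a classifier for trend direction and a regressor for next-day price). The file name, column names and model choices are placeholders, not the authors' actual setup.

```python
# Sketch: classify next-day trend direction and regress next-day price.
# 'prices.csv' with a 'Close' column plus indicator columns is hypothetical.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, mean_squared_error

df = pd.read_csv("prices.csv").select_dtypes(include="number")  # numeric price/indicator columns
df["next_close"] = df["Close"].shift(-1)                        # regression target: next-day price
df["direction"] = (df["next_close"] > df["Close"]).astype(int)  # classification target: up/down
df = df.dropna()

X = MinMaxScaler().fit_transform(df.drop(columns=["next_close", "direction"]))  # feature scaling
y_dir = df["direction"].to_numpy()
y_price = df["next_close"].to_numpy()

split = int(len(df) * 0.8)                                      # simple chronological split
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:split], y_dir[:split])
print("trend-direction accuracy:", accuracy_score(y_dir[split:], clf.predict(X[split:])))

reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:split], y_price[:split])
rmse = mean_squared_error(y_price[split:], reg.predict(X[split:])) ** 0.5
print("next-day price RMSE:", rmse)
```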
Artificial Intelligence and Stock Marketing (ijsrd.com)
Business intelligence is becoming a significant trend in the financial world. One such area is stock market intelligence, which makes use of data mining techniques such as association, clustering, artificial neural networks, decision trees, genetic algorithms, expert systems and fuzzy logic. These techniques can be used to predict stock prices or trading signals automatically with acceptable accuracy. Although a great deal of research has been done in this area, many issues remain unexplored, and it is not clear to new researchers where and how to begin. Data mining can be applied to past and present financial data to generate patterns and decision-making systems. This paper gives a brief review of several attempts made by researchers at stock prediction, focusing on stock market analysis, and defines a new research space for understanding the intelligence of the stock market. This is referred to as stock market intelligence: developing data mining techniques to support all aspects of algorithmic trading, and suggesting a number of research problems in stock intelligence related to forecasting and its accuracy.
This paper examines using different textual representations of financial news articles - bag of words, noun phrases, and named entities - to predict stock prices 20 minutes after an article is released, using support vector machines. The study finds that named entities outperform bag of words, the typical representation used. It introduces the topic of using textual analysis to predict stock prices and reviews prior literature on stock prediction using both fundamental and technical analysis approaches as well as textual data.
The document introduces a fuzzy logic system for momentum analysis to improve high-frequency order execution. It proposes using fuzzy logic to analyze current market conditions and momentum to determine optimal order placement. The system uses fuzzy rules and membership functions to analyze market inputs and determine buy or sell signals. The system is shown to outperform traditional volume-based order execution strategies by increasing profits on both buy and sell orders.
An Innovative Approach to Predict Bankruptcy (vivatechijri)
Bankruptcy is a legal status of a person or other organization that cannot repay their debts to creditors. Bankruptcy prediction is the task of predicting bankruptcy and, through various surveys, financial distress of firms can be avoided. It is a large area of accounting and finance research, and its significance lies in helping financial specialists and creditors assess the probability that a firm may go bankrupt. Estimating the risk of corporate bankruptcy is very important because the effects of bankruptcy are felt at a global level. The aim of predicting financial distress is to develop a predictive model that combines various economic factors to foresee the financial status of a firm. In this domain, various methods have been proposed based on neural networks, Support Vector Machines, Decision Trees, Random Forests, Naïve Bayes, Balanced Bagging and Logistic Regression. In this paper, we document our observations as we explore and build a Restricted Boltzmann Machine for bankruptcy prediction. We start by carrying out data pre-processing, imputing missing values using mean imputation. To address the class imbalance, we apply the Synthetic Minority Oversampling Technique (SMOTE) to oversample the minority class labels. Finally, we analyze and evaluate the performance of the model.
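A minimal sketch of the pre-processing chain described above (mean imputation followed by SMOTE oversampling, then a classifier), using scikit-learn and imbalanced-learn. The file and column names are placeholders, and a logistic regression stands in for the Restricted Boltzmann Machine model purely for illustration.

```python
# Sketch: mean imputation + SMOTE oversampling + a placeholder classifier.
# 'bankruptcy.csv' and the 'bankrupt' label column are hypothetical names.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

data = pd.read_csv("bankruptcy.csv")
X, y = data.drop(columns=["bankrupt"]), data["bankrupt"]

X = SimpleImputer(strategy="mean").fit_transform(X)               # mean imputation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)       # oversample the minority class

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)         # stand-in for the RBM-based model
print(classification_report(y_te, model.predict(X_te)))
```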
This document is a thesis submitted by Jai Kedia for a degree in mathematics and business economics. It examines alternative risk measures to the traditional beta measure in predicting stock returns. The thesis provides an introduction and acknowledges the contributions of the advisors. It then presents an abstract that outlines the goal of analyzing if alternative risk measures such as higher moments, size, leverage, and price-to-book ratio can improve predictions of stock returns beyond just beta. Finally, it presents a table of contents that outlines the various chapters covering the return/risk relationship, modern portfolio theory, mathematical analysis of stock prices, a literature review on previous empirical studies, the empirical analysis conducted, and a conclusion.
COMPARISON OF BANKRUPTCY PREDICTION MODELS WITH PUBLIC RECORDS AND FIRMOGRAPHICS (cscpconf)
Many business operations and strategies rely on bankruptcy prediction. In this paper, we aim to study the impact of public records and firmographics and to predict bankruptcy over a 12-month-ahead period using different classification models, adding value to the traditionally used financial ratios. Univariate analysis shows the statistical association and significance of public records and firmographics indicators with bankruptcy. Further, seven statistical models and machine learning methods were developed, including Logistic Regression, Decision Tree, Random Forest, Gradient Boosting, Support Vector Machine, Bayesian Network, and Neural Network. The performance of the models was evaluated and compared based on classification accuracy, Type I error, Type II error, and ROC curves on the hold-out dataset. Moreover, an experiment was set up to show the importance of oversampling for rare-event prediction. The results also show that the Bayesian Network is comparatively more robust than the other models without oversampling.
IRJET - Stock Market Prediction using ANN (IRJET Journal)
This document summarizes a research paper that aims to predict stock prices using artificial neural networks (ANN). It discusses using factors like moving averages, stochastic oscillator, standard deviation, and on-balance volume as inputs to an ANN model to predict stock prices. The motivation is to help investors make better investment decisions. ANN is chosen because it can model nonlinear relationships better than linear models. Prior literature shows ANN models have achieved higher accuracy than support vector machine or linear regression models for stock price prediction. The methodology section describes the technical indicators used as inputs and the multi-layer perceptron ANN algorithm with backpropagation that is implemented.
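A small sketch of the kind of setup the summary describes: a few technical indicators computed with pandas and fed to a multi-layer perceptron trained by backpropagation-based gradient descent. The file and column names are hypothetical and the indicator parameters are illustrative, not the paper's actual configuration.

```python
# Sketch: compute moving average, stochastic oscillator, standard deviation and
# on-balance volume, then train an MLP on them. 'ohlcv.csv' is hypothetical.
import pandas as pd
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ohlcv.csv")                      # needs High, Low, Close, Volume columns
df["sma_10"] = df["Close"].rolling(10).mean()      # simple moving average
low14, high14 = df["Low"].rolling(14).min(), df["High"].rolling(14).max()
df["stoch_k"] = 100 * (df["Close"] - low14) / (high14 - low14)   # stochastic oscillator %K
df["std_10"] = df["Close"].rolling(10).std()       # rolling standard deviation
df["obv"] = (np.sign(df["Close"].diff()) * df["Volume"]).fillna(0).cumsum()  # on-balance volume

df["up_next"] = (df["Close"].shift(-1) > df["Close"]).astype(int)
df = df.dropna()

X = StandardScaler().fit_transform(df[["sma_10", "stoch_k", "std_10", "obv"]])
y = df["up_next"].to_numpy()
split = int(len(df) * 0.8)

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])
print("test accuracy:", mlp.score(X[split:], y[split:]))
```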
Corporate bankruptcy prediction using Deep learning techniques (Shantanu Deshpande)
This document proposes using deep learning techniques like LSTM neural networks to predict corporate bankruptcy by integrating both financial ratio data and textual disclosures from annual reports. It notes that previous studies have largely relied on statistical models or used only financial data with machine learning. The researcher aims to determine if adding textual data to an LSTM model improves prediction performance over a CNN model using only financial ratios. The document outlines the research question, objectives, and provides an overview of previous bankruptcy prediction studies using statistical, machine learning and deep learning methods.
This document discusses using artificial neural networks (ANNs) to enhance stock picking and investment strategies by incorporating earnings forecasts from financial analysts. It aims to compare different ANN models and identify the best model for forecasting stock prices and improving investment profitability. The study uses quarterly data on stock prices, indexes, analyst earnings forecasts and recommendations from 1997-2003 to train and evaluate ANN models. It finds that ANN strategies based on analyst forecasts achieved higher returns than other investment strategies over this period.
Gain Comparison between NIFTY and Selected Stocks identified by SOM using Tec... (IOSR Journals)
This document discusses a study that uses self-organizing maps (SOM) and technical indicators to identify stocks with potential for investment gains. The study selects stocks and compares their returns over 1.5 months to the NIFTY index. The stocks identified using SOM and technical indicators performed 37.14% better than the NIFTY index over that period. The document provides background on technical analysis indicators like RSI, MACD, and OBV that were used in the analysis. It also describes how SOM can be used to classify stocks based on technical indicator values and select stocks that closely match the properties of the best performing class.
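A minimal, self-contained sketch of how a self-organizing map could group stocks by technical-indicator values and pick those mapped to the best-performing node, as the study describes. The SOM here is hand-rolled in NumPy for illustration, and the indicator matrix and returns are synthetic placeholders, not the study's data.

```python
# Tiny self-organizing map over stock indicator vectors (e.g. scaled RSI, MACD, OBV).
# Data is synthetic; a 4x4 map is trained online, then stocks in the best node are selected.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 stocks x 3 scaled indicators (placeholder)
returns = rng.normal(size=100)           # past return per stock (placeholder)

grid = rng.normal(size=(4, 4, 3))        # 4x4 map of weight vectors
for t in range(2000):                    # online training
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)          # best matching unit
    lr, sigma = 0.5 * np.exp(-t / 1000), 2.0 * np.exp(-t / 1000)
    ii, jj = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    grid += lr * h[..., None] * (x - grid)                  # pull neighbourhood toward x

# Assign each stock to its node, find the node with the best mean past return,
# and select the stocks mapped to that node.
nodes = [np.unravel_index(np.linalg.norm(grid - x, axis=2).argmin(), (4, 4)) for x in X]
best = max(set(nodes), key=lambda n: returns[[k for k, m in enumerate(nodes) if m == n]].mean())
picks = [k for k, m in enumerate(nodes) if m == best]
print("selected stocks (indices):", picks)
```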
This white paper proposes a methodology to dynamically diffuse stress across a credit rating scale while considering the performance of credit scores. The approach involves 5 steps (a small illustrative sketch follows the list):
1) Modeling the initial default rate curve using a Beta distribution.
2) Applying a stress impact to the average default rate.
3) Establishing a relationship between the pre-stress and post-stress curves.
4) Optimizing to find the post-stress (α,β) parameters of the Beta distribution.
5) Analyzing the relationship between the diffusion of stress by rating class and the Gini index, which measures score performance.
The methodology is demonstrated on a real SME loan portfolio, showing how the level
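To make the steps above concrete, here is a small sketch under stated assumptions: the pre-stress default-rate curve across rating classes is fitted to a Beta distribution by method of moments, a stress uplift is applied to the average default rate, and post-stress (α, β) parameters are found by optimization so that the post-stress Beta mean matches the stressed average while staying close to the original shape. The rating-class default rates and the stress multiplier are invented for illustration and are not the white paper's figures.

```python
# Sketch of steps 1-4: fit a Beta to the pre-stress default-rate curve, stress the
# average default rate, then solve for post-stress (alpha, beta). Rates are invented.
import numpy as np
from scipy import stats, optimize

pre_stress = np.array([0.002, 0.005, 0.012, 0.030, 0.070, 0.150])  # default rate by rating class

# Step 1: method-of-moments fit of Beta(alpha, beta) to the pre-stress rates
m, v = pre_stress.mean(), pre_stress.var()
common = m * (1 - m) / v - 1
a0, b0 = m * common, (1 - m) * common

# Step 2: stress impact on the average default rate (e.g. +50%)
stressed_mean = 1.5 * m

# Steps 3-4: find post-stress (alpha, beta) whose mean equals the stressed mean
# while staying close to the pre-stress shape
def objective(p):
    a, b = p
    return (a / (a + b) - stressed_mean) ** 2 + 1e-4 * ((a - a0) ** 2 + (b - b0) ** 2)

res = optimize.minimize(objective, x0=[a0, b0], bounds=[(1e-6, None), (1e-6, None)])
a1, b1 = res.x
print("pre-stress (alpha, beta): ", (a0, b0))
print("post-stress (alpha, beta):", (a1, b1))

# Step 5 (idea): read per-class post-stress rates off the post-stress Beta,
# e.g. by matching quantiles of the pre-stress curve
q = stats.beta.cdf(pre_stress, a0, b0)
print("post-stress curve:", stats.beta.ppf(q, a1, b1))
```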
This document discusses using various artificial intelligence techniques like neural networks and fuzzy inference systems to predict the direction of stock prices for Microsoft and Intel over a 13 year period. It evaluates the performance of different models, including backpropagation neural networks, fuzzy inference systems using neural learning and genetic algorithms. The best models were able to correctly predict the direction of Microsoft stock prices 63% of the time, resulting in returns up to 103%. While prediction of Intel was more difficult, achieving the highest returns required selecting the stock with the best performing model.
The stock exchange is an important apparatus for economic growth, as it offers investors an opportunity to acquire equity and, at the same time, provides resources for organizations' expansion. On the other hand, a major concern about entering this market is the dynamic way in which deals are made, since share prices move in a fast and oscillatory way. In this context, several researchers are studying techniques to predict the stock exchange, maximize profits and reduce risks. Thus, this study proposes a linear regression model for stock exchange prediction which, combined with financial indicators, provides decision-making support for investors.
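As a minimal sketch of the kind of model proposed (ordinary least squares of price on financial indicators), with synthetic data and placeholder indicator names since the study's dataset is not reproduced here.

```python
# Sketch: OLS regression of price on financial indicators (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
indicators = rng.normal(size=(250, 3))                        # e.g. P/E, volume change, moving average
price = 50 + indicators @ np.array([1.2, -0.8, 2.0]) + rng.normal(scale=0.5, size=250)

X = np.column_stack([np.ones(len(indicators)), indicators])   # add intercept
coef, *_ = np.linalg.lstsq(X, price, rcond=None)              # OLS fit
pred = X @ coef
print("coefficients:", coef)
print("in-sample RMSE:", np.sqrt(np.mean((price - pred) ** 2)))
```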
Developing of decision support system for budget allocation of an r&d organiz... (eSAT Publishing House)
1) The document describes developing a decision support system for budget allocation of an R&D organization using a performance-based goal programming model.
2) It analyzes nine years of budget data from the organization and finds a wide gap between allocated funds and funds utilized.
3) The proposed model assesses R&D programs based on priority and risk factors using fuzzy set theory, and aims to allocate budgets in a more realistic and accurate manner than the previous approach.
IRJET - Prediction of Stock Market using Machine Learning Algorithms (IRJET Journal)
The document discusses predicting stock market prices using machine learning algorithms. It reviews past research applying algorithms like KNN, neural networks, ARIMA and random forest to stock price prediction. The paper aims to compare the performance of supervised learning algorithms like logistic regression, KNN and random forest on stock market datasets to determine the most accurate for predicting future prices. It reviews literature on the topic and discusses the methodology and algorithms that will be used to make predictions on datasets from five companies.
Indian Stock Market Using Machine Learning (Volume 1, Oct 2017) (sk joshi)
This document summarizes a research paper that uses machine learning and financial ratios to classify stocks traded on the Indian stock market as either "outperformers" or "underperformers" based on their rate of return. The study uses quarterly data from 50 large market capitalization companies over one year. A support vector machine model achieved 80% accuracy in predicting stock performance on a sector-by-sector basis. While promising, the author acknowledges limitations and outlines areas for further improvement, such as incorporating more external factors like macroeconomic data.
TOURISM DEMAND FORECASTING MODEL USING NEURAL NETWORK (ijcsit)
Travel agencies should be able to judge the market demand for tourism in order to develop sales plans accordingly. However, many travel agencies lack the ability to judge this demand and thus make risky business decisions. Based on the above, this study applied an Artificial Neural Network combined with a Genetic Algorithm (GA) to establish a prediction model of air ticket sales revenue. The GA was used to determine the optimum number of input and hidden nodes of a feedforward neural network. The empirical results suggested that the mean absolute relative error (MARE) between the proposed hybrid model's predicted air ticket sales revenue and the actual value was 10.51% and the correlation coefficient was 0.913. The proposed model had good predictive capability and could provide travel agency operators with reliable and highly efficient analysis data.
This document discusses using classification models to identify whether a person earns more than $50,000 per year based on attributes like age, education, and occupation. It explores the data from the UCI Machine Learning Repository and cleans missing values. Two models are created: a decision tree model and a K-nearest neighbors (KNN) model. The decision tree achieves an accuracy of around 85% and KNN also achieves over 80% accuracy. KNN is suggested to be better suited to this problem because it is not as sensitive to outliers as decision trees.
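A small sketch of the comparison described above, assuming a local copy of the UCI Adult dataset; the file name, the ">50K" label value and the crude ordinal encoding of categorical columns are illustrative choices, not the document's exact procedure.

```python
# Sketch: decision tree vs. k-nearest neighbours on census-income data.
# 'adult.csv' is a placeholder for a local copy of the UCI Adult dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("adult.csv").replace("?", pd.NA).dropna()        # drop rows with missing values
y = (df["income"] == ">50K").astype(int)
X = OrdinalEncoder().fit_transform(df.drop(columns=["income"]).astype(str))  # crude encoding for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=15).fit(X_tr, y_tr)
print("decision tree accuracy:", tree.score(X_te, y_te))
print("KNN accuracy:", knn.score(X_te, y_te))
```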
A model for profit pattern mining based on genetic algorithm (eSAT Journals)
Abstract
Mining profit-oriented patterns is a novel technique of association rule mining in data mining that focuses on issues important to business. Every business aims to generate profit and to find ways to improve it. In earlier days, association rule mining was used for market basket analysis and targeted only some business and commercial aspects. Researchers later turned to the most prominent element of any business, profit, and devised innovative ways to generate association rules based on it. Profit-oriented pattern mining combines statistics-based pattern mining with value-based decision making to generate the patterns with maximum profit and to build recommenders for future strategy. Traditional association rule mining alone is not sufficient for this goal, so we combine the strength of genetic algorithms with association rule mining to enhance its capability. The study shows that the Genetic Algorithm improves the effectiveness and efficiency of the association rule mining outcome, since genetic algorithms can handle problems that are uncertain, multi-dimensional, non-differentiable, non-continuous, non-parametric, non-linear and multi-objective. In this paper we apply profit pattern mining with a genetic algorithm to generate profit-oriented patterns that help in future business expansion and fulfill the business objective.
Keywords: Data Mining, Association Rule Mining, Profit Pattern Mining, Genetic Algorithm
Demand forecasting involves estimating future demand for a product or service. There are two main approaches: obtaining expert opinions or consumer surveys, or using past sales data through statistical techniques. Simple survey methods include expert opinion polls, the Delphi technique to reach consensus among experts, and consumer surveys using complete or sample enumeration. More complex statistical methods include time series analysis to identify trends, using leading economic indicators to forecast changes, and correlation/regression analysis to determine relationships between demand and influencing factors like price, income, and advertising. The most sophisticated method is simultaneous equation modeling or developing an econometric model of an entire economy.
1. The document discusses the relationship between trading volume, stock returns, and volatility based on an analysis of data from the Pakistan Stock Exchange from 2003-2013. It aims to understand how changes in these variables impact each other.
2. Previous research on the topic in developed markets found a positive relationship between trading volume, returns, and volatility, but little work has been done in Pakistan.
3. The study will analyze daily data from the KSE 100 index and 50 firms using ARCH and GARCH models to explore the explanatory power of past trading volume and returns on current market returns and volatility in Pakistan.
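A small sketch of the GARCH(1,1) fit of the kind the study describes, using the Python arch package; synthetic daily returns stand in for the KSE-100 data, and extending the model with lagged volume as an exogenous regressor would follow the same pattern.

```python
# Sketch: GARCH(1,1) on daily returns, as in the volume/return/volatility study.
# Synthetic returns (in percent) stand in for KSE-100 data.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * rng.normal(scale=0.01, size=2500)       # ~10 years of daily returns, in percent

model = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())
print("next-day conditional volatility forecast:",
      float(res.forecast(horizon=1).variance.values[-1, 0]) ** 0.5)
```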
The document presents a project proposal for a comprehensive study on the application of technical indicators on the BSE index (SENSEX). The objectives are to measure market movements, benchmark fund performance, and provide guidelines to investors. The study will analyze SENSEX performance over the last five years using simple moving average, exponential moving average, and relative strength index indicators. The methodology will use secondary data and analytical research. Limitations include a limited time frame and the study considering only one variable. The expected contribution is providing knowledge on technical analysis to predict stock prices and investment decisions.
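For reference, the three indicators named in the proposal are straightforward to compute with pandas; a minimal sketch follows, with a placeholder file name and illustrative window lengths, and using one common simple-average variant of RSI.

```python
# Sketch: simple moving average, exponential moving average and RSI on a close-price series.
# 'sensex.csv' with a 'Close' column is a placeholder.
import pandas as pd

px = pd.read_csv("sensex.csv")["Close"]

sma_50 = px.rolling(50).mean()                   # simple moving average
ema_50 = px.ewm(span=50, adjust=False).mean()    # exponential moving average

delta = px.diff()
gain = delta.clip(lower=0).rolling(14).mean()    # average gain over 14 periods
loss = (-delta.clip(upper=0)).rolling(14).mean() # average loss over 14 periods
rsi_14 = 100 - 100 / (1 + gain / loss)           # relative strength index

print(pd.DataFrame({"SMA50": sma_50, "EMA50": ema_50, "RSI14": rsi_14}).tail())
```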
This document presents an estimated arbitrage-free model that jointly models nominal and real US Treasury yields. It estimates separate arbitrage-free Nelson-Siegel models for nominal and real yields, finding a three-factor model fits nominal yields well and a two-factor model fits real yields. It then estimates a four-factor joint model that fits both yield curves. The joint model is used to decompose breakeven inflation rates into inflation expectations and inflation risk premium components.
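For context, the Nelson-Siegel curve underlying the models mentioned above expresses the yield at maturity tau in terms of level, slope and curvature factors; a small sketch evaluating the curve with illustrative parameter values (the decay rate and factor values are not from the paper).

```python
# Sketch: the Nelson-Siegel yield curve
#   y(tau) = L + S * (1 - exp(-lam*tau)) / (lam*tau)
#              + C * ((1 - exp(-lam*tau)) / (lam*tau) - exp(-lam*tau))
import numpy as np

def nelson_siegel(tau, level, slope, curv, lam=0.6):
    loading = (1 - np.exp(-lam * tau)) / (lam * tau)
    return level + slope * loading + curv * (loading - np.exp(-lam * tau))

maturities = np.array([0.5, 1, 2, 5, 10, 30])                  # years
print(nelson_siegel(maturities, level=0.04, slope=-0.02, curv=0.01))
```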
Knowledge pre-processing, model and reasoning issues, performance metrics, quality issues, post-processing of discovered structures, visualization, and on-line change remain key challenges. In this paper, neural-network-based forecasting of stock prices of selected sectors on the Bombay Stock Exchange shows that neural networks have the power to predict prices despite the volatility of the markets [9]. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Artificial Neural Networks are being counted as the wave of the future in computing. They are self-learning mechanisms that do not require the traditional skills of a programmer. Backpropagation is one approach to implementing neural networks: it is a form of supervised learning for multi-layer nets in which error at the output layer is propagated back to earlier layers, allowing the incoming weights to those layers to be updated. It is the training algorithm most often used in current neural network applications. In this paper, we apply data mining technology to the stock market in order to research the trend of prices; the aim is to predict the future trend of the stock market and the fluctuation of prices. The paper points out the shortcomings of traditional statistical analysis of stocks, and then uses the BP neural network algorithm to predict the stock market by establishing a three-tier neural network structure, namely an input layer, a hidden layer and an output layer. Finally, we obtain a better predictive model that improves forecast accuracy.
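A compact, from-scratch sketch of the three-layer backpropagation network the paragraph describes (input, hidden and output layers, with output-layer error propagated back to update the weights). The data, layer sizes and learning rate are synthetic and illustrative, not the paper's configuration.

```python
# Sketch: three-layer network (input-hidden-output) trained by backpropagation.
# Synthetic data; sigmoid activations; mean-squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                     # e.g. 4 scaled price features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)    # toy up/down target

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)          # input -> hidden weights
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)          # hidden -> output weights
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                                      # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                                 # output-layer error
    d_out = err * out * (1 - out)                                 # back-propagate through sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())
```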
IOSR Journal of Electronics and Communication Engineering(IOSR-JECE) is an open access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Development of Aquaponic System using Solar Powered Control Pump (IOSR Journals)
This document describes the development of an aquaponic system powered by a solar panel. The system uses a microcontroller to control a water pump and air pump based on input from the solar panel. It consists of a solar panel, inverter, water pump, air pump, battery, and microcontroller. The solar panel charges the battery during the day and powers the air pump at night. Experimental results showed that the solar panel output a maximum of 18.49 V at noon and that the inverter had an efficiency of 65.55% when converting the solar panel's DC output to 240 V AC to power the water pump. The overall system was designed to provide a low-cost and sustainable aquaponic food production method using solar energy.
This document summarizes a research paper that analyzes the performance of adaptive equalization algorithms RLS and CMA for noisy speech signals. It finds that the RLS algorithm has a faster convergence rate but requires more computing power, while the CMA algorithm has a slower convergence rate but requires less computing power and performs relatively better. The parameters of an adaptive equalizer combining these algorithms with a noisy audio source are optimized in simulations. The results show that CMA has a better frequency response and MSE convergence than RLS in the presence of noisy audio. Therefore, blind equalization using CMA is concluded to perform better than trained equalization with RLS for noisy speech signals.
Dynamic AI-Geo Health Application based on BIGIS-DSS Approach (IOSR Journals)
This document describes a proposed approach called Dynamic AI-GeoHealth Application based on BIGIS-DSS that integrates business intelligence, geographic information systems, and predictive analytics for health decision making. The approach was tested in Egypt's Ministry of Health and compared to an existing OLAP-GIS approach. Results found that the BIGIS-DSS approach enabled tasks to be completed faster and with better performance by allowing complex spatial and temporal queries, predictive modeling, and advanced analytics across multiple data dimensions. The BIGIS-DSS approach provides a more complete solution for health sector analysis and decision support.
This document compares and evaluates several algorithms for mining association rules from frequent itemsets in transactional databases. It summarizes the Apriori, FP-Growth, Closure and MaxClosure algorithms, and experimentally compares their performance based on factors like number of transactions, minimum support, and execution time. The paper finds that algorithms like FP-Growth that avoid candidate generation perform better than Apriori, which generates a large number of candidate itemsets and requires multiple database scans.
MODELING THE AUTOREGRESSIVE CAPITAL ASSET PRICING MODEL FOR TOP 10 SELECTED... (IAEME Publication)
Systematic risk is the uncertainty inherent to the entire market or an entire market segment, and unsystematic risk is the uncertainty that comes with the company or industry we invest in; the latter can be reduced through diversification. The study selects a non-linear capital asset pricing model for top securities on the BSE and attempts to identify the marketable and non-marketable risk faced by investors in the top companies. The analysis was conducted in several stages, including vector autoregression of systematic and unsystematic risk.
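For concreteness, systematic risk (beta) is commonly estimated by regressing a stock's returns on market returns, with the residual variation capturing the unsystematic part; a minimal sketch on synthetic return series (the weekly frequency and the true beta of 1.3 are invented for illustration).

```python
# Sketch: estimating systematic risk (beta) and unsystematic (residual) risk
# by regressing stock returns on market returns. Returns are synthetic.
import numpy as np

rng = np.random.default_rng(2)
market = rng.normal(0.0005, 0.01, size=260)                        # e.g. 260 weekly market returns
stock = 0.0002 + 1.3 * market + rng.normal(0, 0.015, size=260)     # stock with true beta ~ 1.3

beta, alpha = np.polyfit(market, stock, 1)                         # slope = beta, intercept = alpha
residuals = stock - (alpha + beta * market)

print("beta (systematic risk):", beta)
print("unsystematic risk (residual std):", residuals.std())
```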
Effects of Option Characteristics and Underlying Stock on Option Beta (Dharma Bagoes Oka)
Beta (β) is one of the risk management tools used to capture the risk exposures of hedge-fund investments. As most hedge funds today trade derivative securities, research on the measurement of derivative beta is important. The aim of this paper is to examine the factors that may affect option beta in the United States market. My hypothesis comprises three main parts. First, I hypothesize that five variables (type of option, strike price, days to maturity, firm size and book-to-market ratio) have a linear relationship with option beta. Second, I hypothesize that the strength of this linear relationship varies by type of industry. Third, I hypothesize that the strength of the linear relationship also varies across these five variables themselves. To begin, I use regression to estimate the beta of the underlying stock. Then, I estimate the option beta by multiplying the beta of the underlying stock by the option elasticity. I then use regression to test whether the five variables have a linear relationship with option beta. I find that three variables (type of option, strike price and days to maturity) have the most significant linear relationship with option beta, while firm size has a less significant linear relationship and book-to-market ratio has no significant linear relationship. Furthermore, using two-way ANOVA, I test whether the strength of the linear relationship varies by type of industry and across the five variables. There is not enough evidence to infer that the strength of the linear relationship between the five variables and option beta varies by type of industry; instead, there is enough evidence to infer that it varies across the variables themselves.
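The option-beta construction described above (beta of the underlying times option elasticity) can be sketched with a Black-Scholes call, where elasticity is delta times the stock price over the option price; the spot, strike, rate, volatility and underlying beta below are purely illustrative.

```python
# Sketch: option beta = beta of underlying * option elasticity,
# with elasticity = delta * S / C for a Black-Scholes call. Inputs are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

S, K, T, r, sigma = 100.0, 105.0, 0.5, 0.02, 0.25    # spot, strike, years to maturity, rate, vol
beta_stock = 1.2

N = NormalDist().cdf
d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
call = S * N(d1) - K * exp(-r * T) * N(d2)           # Black-Scholes call price
delta = N(d1)

elasticity = delta * S / call
beta_option = beta_stock * elasticity
print(f"call={call:.2f}, delta={delta:.3f}, elasticity={elasticity:.2f}, option beta={beta_option:.2f}")
```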
Does the capital assets pricing model (capm) predicts stock market returns in... (Alexander Decker)
This document examines whether the Capital Asset Pricing Model (CAPM) can predict stock returns in Ghana using data from selected stocks on the Ghana Stock Exchange from 2006-2010. The results found no statistically significant relationship between actual and predicted returns, indicating CAPM with constant beta cannot explain differences in returns. It was also found that some stocks were on average undervalued while one was overvalued over the period studied. The conclusion is that the standard CAPM model cannot statistically explain the observed differences in actual and estimated returns of the selected Ghanaian stocks.
This document presents a system for predicting corporate bankruptcy using textual disclosures from SEC filings. It discusses how previous studies have used financial ratios and market data to predict bankruptcy, but that textual disclosures also provide important unstructured qualitative information. The proposed system uses natural language processing and machine learning algorithms to extract features from 10-K and 10-Q filings and predict bankruptcy with high accuracy, even before the final bankruptcy occurs. It aims to improve on previous bankruptcy prediction methods by incorporating both financial and textual data sources.
This document discusses using institutional ownership data to predict future stock returns. It analyzes institutional ownership data from the US between 2004-2014 using machine learning algorithms. A support vector machine was able to classify stocks into 3 bins of future 4-quarter returns with 37.3% accuracy, significantly better than chance. The study aims to identify which institutional ownership features best predict returns and whether their combination improves predictions over individual features.
Financial Time Series How Predictable are theyijtsrd
This document summarizes a study that investigated the predictability of stock market returns using entropy rates. The study analyzed daily closing price data from four major stock market indices: Nasdaq 100, S&P 500, SSE and SZSE 500. It calculated the entropy rates of returns and found they were all less than the theoretical maximum, indicating returns are not completely random but not highly predictable either. The study estimated the theoretical maximum predictability of returns could reach 66% by properly accounting for the logarithm base used in entropy calculations.
Forecasting Stocks with Multivariate Time Series Models.inventionjournals
This work seeks to forecast stocks of the Nigerian banking sector using probability multivariate time series models. The study involved the stocks from six different banks that were found to be analytically interrelated. Stationarity of the six series were obtained by differencing. Model selection criteria were employed and the best fitted model was selected to be a vector autoregressive model of order 1. The model was subjected to diagnostic checks and was found to be adequate. Consequently, forecasts of stocks were generated for the next two years.
This document introduces the Two Sigma Factor Lens, which is a framework for constructing a parsimonious set of risk factors that individually describe independent risks across many asset classes yet collectively explain much of the risk in typical institutional investor portfolios. The lens is intended to capture the majority of risk in a holistic yet concise manner so that changes to factor exposures can easily translate to asset allocation changes. The document discusses how analyzing portfolios through a risk factor lens allows investors to better understand overlapping risk sources across asset classes and more efficiently manage portfolio risk.
An Empirical Assessment of Capital Asset Pricing Model with Reference to Nati...ijtsrd
This study concentrates on an empirical assessment of the Capital Asset Pricing Model (CAPM) on the National Stock Exchange (NSE). CAPM assists in determining a well diversified portfolio. The main objective of this research paper is to check the applicability of the Nobel laureate's model in the Indian equity market by testing the relationship between risk and return, i.e., whether there is any direct proportionality between the expected rate of return and systematic risk. It relates its results by using beta (systematic risk) as a measuring factor. The study was conducted for a period of 260 weeks from 7 April 2013 to 25 March 2018, with 45 companies from the NSE picked as a proxy for the market portfolio. The research used regression analysis on stocks and portfolios to obtain the final results. The study finds that the model is not applicable to the Indian market and contradicts the expectation that expected return and systematic risk are linearly related. Miss. Yashashri Shinde | Miss. Teja Mane "An Empirical Assessment of Capital Asset Pricing Model with Reference to National Stock Exchange" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue | Fostering Innovation, Integration and Inclusion Through Interdisciplinary Practices in Management, March 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23105.pdf
Paper URL: https://www.ijtsrd.com/management/public-sector-management/23105/an-empirical-assessment-of-capital-asset-pricing-model-with-reference-to-national-stock-exchange/miss-yashashri-shinde
The document discusses several articles related to security analysis and portfolio management. It summarizes Adam Y. C. Lei & Huihua Li's experience using Bloomberg terminals in a finance course to teach students security analysis and portfolio management. It also summarizes Ali Asghary Karahroudy's analysis of security issues in cloud computing and a proposed file distribution model to address those issues. Finally, it briefly mentions several other articles related to composable security analysis, using chance-constrained programming for security analysis and investment decisions, the definition of a project portfolio, criticisms of the Capital Asset Pricing Model, and estimating the value of security analysis and market timing ability.
What is wrong with the quantitative standards for market riskAlexander Decker
This document evaluates the quantitative standards laid out in the Basel Accords for implementing internal market risk models. It finds that some standards may not accurately reflect research findings. For example, the standards do not specify a VaR method despite evidence that volatility is time-varying and returns are fat-tailed. Additionally, requiring a minimum historical period runs contrary to evidence of clustered volatility. Several standards effectively smooth the market risk charge over time in ways that make it unresponsive. Overall, the document argues that some quantitative standards could be improved by better aligning with available research findings.
Investment Portfolio Risk Manager using Machine Learning and Deep-Learning.IRJET Journal
This document discusses building a system for predicting portfolio risk using machine learning and deep learning models. It reviews several related works that use techniques like decision trees, genetic algorithms, fuzzy logic, neural networks and sentiment analysis to predict stock performance from historical data and news/social media. The proposed work aims to take various inputs like news, tweets, foreign institutional investor data and historical stock prices to train models that can provide insights on portfolio risk and how stocks may perform. It will use long short-term memory networks with sentiment analysis and compare portfolios and historical data to make accurate predictions.
This document analyzes the dual beta model for stocks listed on the Karachi Stock Exchange between 1997-2007. It finds that beta, a measure of risk, varies between bull and bear markets for some stocks. Specifically, the beta was higher in bear markets than bull markets for 9 of the 15 stocks analyzed, while the reverse was true for the other 6 stocks. The analysis uses a dual beta model that estimates separate betas for bull and bear periods to test if the betas are statistically different between the two market conditions. The results provide some evidence that beta varies depending on whether the overall market is rising or falling.
Stock Market Prediction using Machine Learningijtsrd
Stock market prediction is the task of forecasting upcoming stock values. It is very difficult because of the unbalanced nature of stocks. In this work, an attempt is made to predict the stock market trend. This research aims to combine multiple existing techniques into a much more robust prediction model that can handle various scenarios in which investment can be beneficial; by combining both techniques, the prediction model can provide more accurate and flexible recommendations. Instead of using traditional methods, we approached the problem using machine learning techniques, aiming to change the way people address data processing problems in the stock market by predicting the behavior of stocks. If we can predict how a stock will behave in the short-term future, we can queue up our transactions earlier and be faster than everyone else; in theory, this allows us to maximize profit without needing to be physically located close to the data sources. We examined three main models: first a simple prediction using a moving average, second an LSTM model, and finally an ARIMA model. The motive is to increase the accuracy of predicting the stock market price. Each of these models was applied to real stock market data and checked for whether it could return a profit. Subham Kumar Gupta | Dr. Bhuvana J | Dr. M N Nachappa "Stock Market Prediction using Machine Learning" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-3, April 2022, URL: https://www.ijtsrd.com/papers/ijtsrd49868.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/49868/stock-market-prediction-using-machine-learning/subham-kumar-gupta
STRESS TESTING IN BANKING SECTOR FRAMEWORKDinabandhu Bag
This document summarizes a study analyzing default correlation in retail banking portfolios in India. It discusses:
1) Literature on default correlation and factor modeling approaches to estimate correlation. Previous studies found correlation varies over time and across industries/ratings.
2) Analysis of a test portfolio with 4 retail segments showing migration of exposures between segments over 14 months. Segments showed varying default rate trends over time.
3) The study builds a multi-factor linear model to test if external economic factors significantly impact default correlations between segments over time.
Coursework- Soton (Single Index Model and CAPM)Ece Akbulut
This document provides an introduction to portfolio management and the single-index model (SIM) and capital asset pricing model (CAPM). It first describes SIM and tests it on eight companies, analyzing the results. It finds SIM reduces the workload of estimating variables but oversimplifies risk. The document then introduces CAPM, tests it on the same companies, and analyzes the results and merits and demerits of CAPM. It concludes by discussing the sensitivity of the models to sector characteristics.
Forecasting Economic Activity using Asset PricesPanos Kouvelis
This dissertation evaluates how well asset prices, in particular the term spread, the short rate and real stock returns, forecast GDP growth and industrial production. The study uses data from seven countries (Canada, France, Germany, Italy, Japan, United Kingdom and United States) and covers the period from 1966 to the present. The research finds that asset prices have forecasting power for one quarter/month but lose their power as the forecasting horizon increases. Moreover, the paper finds that the real stock return is the best predictor of GDP growth and that the short rate has more predictive content than the term spread.
Keywords: Term spread, short rate, stock returns, output growth, forecasting horizon, out-of-sample statistics
Impact and Implications of Operations Research in Stock Marketinventionjournals
The motivation of this article is to advocate the managerial practice of making decisions based not only on intuition but on intuition combined with quantitative analysis. Operations Research (OR) is one of the main managerial decision-science tools used by for-profit and non-profit organizations, including stock market participants. Forecasting stock returns is an important financial subject that has attracted researchers' attention for a long time. It involves the assumption that fundamental information publicly available in the past has some predictive relationship to future stock returns. This review tries to help investors in the stock market choose better timing for buying or selling stocks based on information extracted from the historical prices of those stocks. The decision taken will be based on a decision tree classifier, which is one of the Operations Research techniques.
IOSR Journal of Business and Management (IOSR-JBM)
e-ISSN: 2278-487X, p-ISSN: 2319-7668. Volume 9, Issue 6 (Mar. - Apr. 2013), PP 22-27
www.iosrjournals.org
Forecast Ability of the Blume’s and Vasicek’s Technique:
Evidence from Bangladesh
Mokta Rani Sarker (School of Business, University of Information Technology & Sciences, Bangladesh)
Abstract: Estimation of forecasted beta is one of the most discussed issues both in the finance literature and in empirical research. This paper deals with the theoretical and empirical issues of forecasted beta estimation. The empirical study focuses on the ability of different methods to forecast systematic risk, and hypothesis testing is then carried out to determine whether there are any significant differences between these methods for estimating future betas in the context of Bangladesh during a specified time period. This study was carried out on single stocks listed on the Dhaka Stock Exchange (DSE) rather than on stock portfolios. It is concluded that there is no significant difference between Blume's Technique and Vasicek's Technique for estimating future betas in the context of Bangladesh, and that the forecasted betas from Blume's Technique and Vasicek's Technique are significantly different from the actual betas. The findings of this paper will be useful for policy makers, all kinds of investors, corporations and other financial market participants.
Keywords: Beta, Portfolio, Blume's Technique, Vasicek's Technique, Dhaka Stock Exchange (DSE).
I. Introduction
The use of the single index model calls for estimates of the beta of each stock that is a potential candidate for inclusion in a portfolio. Estimates of future beta could be arrived at by estimating beta from past data and using this historical beta as an estimate of the future beta. There is evidence that historical betas provide useful information about future betas. Although the majority of such studies were carried out in developed countries, only a limited number of studies were conducted in developing countries. This study attempts to forecast beta using Blume's and Vasicek's Techniques and to assess their accuracy. Finally, hypothesis testing is done in order to find out whether there are any significant differences between these methods for estimating future betas in the context of Bangladesh during a specified time period.
1.1 Problem Statement
Beta is a valuable instrument in finance for different purposes such as stock valuation or optimal portfolio composition. Since only future values are relevant for these purposes, the forecast of systematic risk becomes a significant issue; therefore, the stationarity of stock and portfolio betas turns out to be a researchable question. The problem statement for this research is to observe any pattern in a stock's systematic risk in order to increase its forecast ability.
1.2 Objectives of the Study
The study has been conducted to forecast beta using Blume's Technique and Vasicek's Technique, and it has been carried out on individual securities listed on the Dhaka Stock Exchange (DSE). The objectives of this study are:
a. Risk-return analysis of individual securities listed on the DSE.
b. Forecast beta using Blume's Technique and Vasicek's Technique.
c. Compare the forecasted beta with the actual beta.
d. Find out which technique performs well in forecasting beta on the DSE as well as in the Bangladesh stock market.
e. Assist investors in the portfolio selection process to make the right choice.
1.3 Structure of the Paper
The text is divided into six parts. Part One, 'Introduction', introduces the importance of forecasting beta; the background of the problem is given briefly in this part, followed by the Problem Statement and the Objectives of the Study. Part Two, 'Literature Review', is executed in three phases: it discusses, firstly, an overview of the Dhaka Stock Exchange; secondly, systematic risk; and thirdly, adjusted beta. Part Three, 'Methodology and Data', explains the data source and methodology. Part Four, 'Data Analysis and Findings', discusses the results of the study; as the results are determined by Analysis of Variance (ANOVA), presenting the data in the findings part makes them easier to understand. Part Five, 'Conclusion', concludes with the research results as well as the limitations of the research. Part Six, 'References', provides the list of full bibliographical details and journal titles.
II. Literature Review
2.1 Overview of Dhaka Stock Exchange (DSE)
A total of 235 companies traded on the DSE up to June 2012. In 2010-2011, the volume of trade in listed securities increased manifold at the Dhaka Stock Exchange: a total of 1969 crore and 52 lakh securities were traded, the value of which stands at Tk. 3 lakh 25 thousand 915 crore. On the other hand, 1012 crore and 84 lakh securities were traded in 2009-10, the value of which was Tk. 256,349 crore. The number of trading days was 240 in 2010-2011, against 244 days in 2009-2010. The average number of securities traded was 8.20 crore in 2010-2011 and the average transaction was Tk. 1357 crore 98 lakh; in 2009-2010, 4.15 crore securities were traded on average and the average transaction was Tk. 1050 crore and 61 lakh. The DSE's all-share price index was 5160.05 points at the year ended on June 30, 2010; it lost 66.86 points and stood at 5093.19 points on June 30, 2011. The DSE's all-share price index stood at its highest, 7383.94 points, on December 5, 2010. Nine new companies were listed on the DSE during 2010-11, raising the number of listed companies to 232.
In addition, the DSE's market capitalization to GDP ratio was 41.10 percent at the year ended on June 30, 2011. Collecting tax at source on share transactions from its member companies, the Dhaka Stock Exchange deposited Tk. 325.91 crore in fiscal year 2010-2011 and Tk. 128.17 crore in fiscal year 2009-2010 to the government exchequer.
2.2 Systematic Risk
The measurement and determination of risk have received considerable attention in recent years. One
measure of risk is systematic risk, defined as the risk inherent to the entire market or entire market segment. It is
also known as "un-diversifiable risk" or "market risk." Interest rates, recession and wars all represent sources of
systematic risk because they affect the entire market and cannot be avoided through diversification. Whereas
this type of risk affects a broad range of securities, unsystematic risk affects a very specific group of securities
or an individual security. Systematic risk can be mitigated only by being hedged. Even a portfolio of well-
diversified assets cannot escape all risk. Systematic risk is also defined in terms of the covariance of a security's
return with the return from the market portfolio. The relationship is often standardized by dividing the
covariance by the variance of return from the market portfolio. Hereafter, this measure of standardized
systematic risk shall be referred to as beta.
2.3 Adjusted Beta
To correct the tendency of betas towards one, two main models were suggested in the literature: Blume's Model and Vasicek's Model. Which model is preferable, if any, in forecasting betas? Murray (1995), Hawawini, Michel and Corhay (1985), and Luoma, Martikainen and Perttunen (1996) presented evidence that adjusted betas tend to outperform unadjusted betas. [3]
Gooding and O'Malley (1977), who developed an empirical test on both adjusted and unadjusted betas, rejected beta stationarity. They found that well-diversified portfolios of extreme betas are significantly non-stationary. Therefore they concluded that, in order to improve the performance of beta forecasts, adjustments should be made not only to take into consideration the regression tendencies but the market trends too. [4]
According to Blume (1971 and 1975), the systematic risk of a stock portfolio tends to show relatively stable characteristics. However, he observed a tendency of betas to converge towards the mean of all betas (1.0). He corrected past betas by directly measuring this adjustment toward one and assuming that the adjustment in one period is a good estimate of the adjustment in the next. It modifies the average level of betas for the population of stocks. [6]
Vasicek (1973) has suggested the following scheme that incorporates these properties: if we let β1 equal the average beta across the sample of stocks in the historical period, then the Vasicek procedure involves taking a weighted average of β1 and the historic beta for security i. The weighting procedure adjusts observations with large standard errors further toward the mean than it adjusts observations with small standard errors. As Vasicek has shown, this is a Bayesian estimation technique. The estimate of the average future beta will tend to be lower than the average beta in the sample of stocks over which betas are estimated. [7]
Klemkosky and Martin (1975) found that the Bayesian technique had a slight tendency to outperform
the Blume technique. However, the differences were small and the ordering of the techniques varied across
different periods of time. [8]
Elton, Gruber and Urich (1978) found some time periods where, with statistical significance, the Blume technique outperformed the Vasicek technique in forecasting future betas. But the answer to which is best should depend on the goal for which betas are being computed. [9]
Emanuel (1980) concluded that for small portfolios their beta coefficients of one period were good
predictors of the corresponding betas in the subsequent period. [10]
3. Forecast Ability of the Blume’s and Vasicek’s Technique: Evidence from Bangladesh
www.iosrjournals.org 24 | Page
Dimson and Marsh (1983) investigated the stability of the beta of thin trading securities after using a
method designed to avoid thin trading bias. The findings of this study indicated that the stability of individual
securities betas was moderate; whereas portfolio betas were very stable (the portfolio beta stability was
examined by using the transition matrices method, while the present study utilizes the mean square error
technique). Also by employing two adjustment techniques (Blume and Vasicek) for the security beta
coefficients their results showed improvements in beta forecasts. [11]
Bera and Kannan (1986) tested the data and observed possible deviation from normality and concluded
that adjustment techniques proposed by Blume and Vasicek may not always be appropriate. [12]
Lally (1998) concluded that typical applications of Vasicek's method seem to mistakenly equate the
prior distribution with the cross-sectional distribution of estimated rather than true betas, that Blume's implicit
forecast of any tendency for true betas to regress towards one may not be desirable, that preliminary partitioning
of firms into industry type groups (as is typical for Vasicek) is desirable, and that conversion of OLS equity
betas to asset betas before applying the correction process is also desirable. [13]
Cloete, Jonah and Wet (2002) showed that the idea of combining robust estimators with the Vasicek estimator yields a class of new estimators that performs well when compared to traditional estimators. [14]
Gray, Hall and Klease (2006) showed that Vasicek beta estimates are an unbiased estimate of the subsequent period's OLS beta estimate, while OLS and Blume beta estimates are biased predictors. [15]
Sinha and Jayaraman (2012) observed that Bayesian techniques outperform classical methods in most cases. Further, they observed that Blume's technique helps to capture the over- and under-estimation in the beta measure; this information can be utilized optimally to apply the Bayesian model under a bilinear loss function and improve the accuracy of the estimates. [16]
Based on the study's objectives and the literature review, the following hypotheses can be formulated:
H1: There is no significant difference between the actual beta and the forecasted beta using Blume's Technique.
H2: There is no significant difference between the actual beta and the forecasted beta using Vasicek's Technique.
H3: There is no significant difference between the outcomes of Blume's Technique and Vasicek's Technique.
III. Methodology And Data
3.1 Data Source
This paper aims at forecasting beta using Blume's technique and Vasicek's technique and at assessing their ability to forecast beta. For this purpose, the monthly closing prices of the shares, dividend information and the monthly closing value of the benchmark market index (the DSE all-share price index) have been used for the period from January 2001 to December 2012. They were collected from the Dhaka Stock Exchange. The study takes 101 companies listed on the Dhaka Stock Exchange (DSE). It uses secondary data because it pertains to a historical analysis of reported financial data. The collected data were consolidated as per the study requirements, and various statistical tools were applied to analyze the data using Microsoft Excel software.
3.2 Methodology
This study is based on different techniques for better estimation of betas. Beta is simply a measure of the sensitivity of a stock to market movements. Forecasting betas with accuracy is important because they affect the inputs for the portfolio analysis. The calculation of beta for each stock is formally shown below:
βi = σim / σm²   (1)
where βi = beta for individual security i; σim = covariance between the return on individual security i and the return on the market; and σm² = variance of the market return.
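For illustration only, the following is a minimal sketch (not the author's code) of how equation (1) can be computed from monthly closing prices; the price series and the omission of dividends are assumptions made for brevity.

```python
# Minimal sketch: estimating beta for one stock from monthly closing prices
# of the stock and of the market index (all numbers hypothetical).
import numpy as np

stock_prices = np.array([100.0, 102.0, 101.5, 104.0, 107.0, 105.0])     # hypothetical monthly closes
index_values = np.array([5000.0, 5050.0, 5020.0, 5100.0, 5180.0, 5150.0])  # hypothetical index values

# Monthly returns (dividends omitted here for brevity)
r_stock = np.diff(stock_prices) / stock_prices[:-1]
r_market = np.diff(index_values) / index_values[:-1]

# Equation (1): beta_i = cov(r_i, r_m) / var(r_m)
cov_im = np.cov(r_stock, r_market, ddof=1)[0, 1]
var_m = np.var(r_market, ddof=1)
beta = cov_im / var_m
print(f"Estimated beta: {beta:.4f}")
```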
3.2.1 Blume’s Technique
Blume's analysis of the behavior of betas over time shows that there is a tendency of actual betas in the forecast period to move closer to one than the estimated betas from historical data. Blume's technique attempts to describe this tendency by correcting historical betas to adjust the betas towards one, assuming that the adjustment
in one period is a good estimate in the next period. Consider betas for all stocks i in period 1, βi1 and betas for
the same stocks i in the successive period 2, βi2. The betas for period 2 are then regressed against the betas for
period 1 to obtain the following equation:
βi2 = b0 + b1βi1 (2)
The relationship implies that the beta in period 2 is b0 + b1 times the beta in period 1. Equation (2) is then used to forecast betas for period 3.
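As a rough illustration of Blume's adjustment described above (not the paper's implementation), the sketch below regresses hypothetical period-2 betas on period-1 betas and applies the fitted relationship to forecast period-3 betas; all numbers are invented.

```python
# Minimal sketch of Blume's technique: fit beta_i2 = b0 + b1 * beta_i1 by OLS,
# then apply the same adjustment to period-2 betas to forecast period 3.
import numpy as np

beta_p1 = np.array([0.55, 0.80, 1.10, 1.35, 0.95])  # hypothetical betas, period 1
beta_p2 = np.array([0.68, 0.85, 1.05, 1.22, 0.98])  # hypothetical betas, period 2

# Equation (2): slope b1 and intercept b0 from a degree-1 polynomial fit
b1, b0 = np.polyfit(beta_p1, beta_p2, 1)

# Forecast of period-3 betas
beta_p3_forecast = b0 + b1 * beta_p2
print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")
print("Forecast betas for period 3:", np.round(beta_p3_forecast, 3))
```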
3.2.2 Vasicek’s Technique
Vasicek's technique adjusts past betas towards the average beta by modifying each beta depending on
the sampling error about beta. When the sampling error is large, there is higher chance of larger difference from
the average beta. Therefore, lower weight will be given to betas with larger sampling error. The following
formula demonstrates this idea:
βi2 = [σβi1² / (σβ1² + σβi1²)] β1 + [σβ1² / (σβ1² + σβi1²)] βi1   (3)
where βi2 = forecast of beta for stock i for period 2 (later period); β1 = average beta across the sample of stocks in period 1 (earlier period); σβ1² = variance of the distribution of historical estimates of beta across the sample of stocks; βi1 = estimate of beta for stock i in period 1; and σβi1² = variance of the estimate of beta for stock i in period 1.
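Purely as an illustration (not the paper's code), the following sketch applies equation (3) to hypothetical period-1 beta estimates and their standard errors; the weighting pushes noisy estimates further toward the sample average.

```python
# Minimal sketch of Vasicek's Bayesian adjustment, equation (3),
# using invented period-1 beta estimates and standard errors.
import numpy as np

beta_p1 = np.array([0.55, 0.80, 1.10, 1.35, 0.95])       # hypothetical OLS betas, period 1
se_beta_p1 = np.array([0.30, 0.15, 0.10, 0.25, 0.12])    # hypothetical standard errors of those betas

beta_bar = beta_p1.mean()          # average beta across the sample in period 1
var_cross = beta_p1.var(ddof=1)    # variance of beta estimates across the sample
var_i = se_beta_p1 ** 2            # variance of each individual beta estimate

# Weight on the sample average is larger when a stock's beta estimate is noisier
w_mean = var_i / (var_cross + var_i)
beta_vasicek = w_mean * beta_bar + (1.0 - w_mean) * beta_p1
print("Vasicek-adjusted betas:", np.round(beta_vasicek, 3))
```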
IV. Data Analysis And Findings
For estimating beta, a sample size of 101 companies is selected from the securities listed on Dhaka
Stock Exchange (DSE). DSE all share price index is taken as the market index. Monthly closing price of these
securities is used for the period from January 2001 to December 2012. They are collected from DSE.
Table 1: Sector-wise Percentage of Data Coverage
Name of the Industry | Total Number of Companies | No. of Companies | % of Data Coverage
Bank 30 16 53.33%
Financial Institutions 22 3 13.64%
Engineering 23 16 69.57%
Food & Allied 16 5 31.25%
Fuel & Power 14 3 21.43%
Jute 3 0 0.00%
Textile 26 10 38.46%
Pharmaceuticals & Chemicals 20 13 65.00%
Paper & Printing 1 0 0.00%
Services & Real Estate 4 2 50.00%
Cement 6 4 66.67%
IT - Sector 6 0 0.00%
Tannery Industries 5 4 80.00%
Ceramic Industry 5 2 40.00%
Insurance 45 17 37.78%
Miscellaneous 9 6 66.67%
Total 235 101 42.98%
From Table 1 it can be seen that, among 235 companies, 101 companies were selected because of the availability of data within the time frame (January 2001 to December 2012). This covers 42.98% of the data, and it can be said that the data coverage is more or less satisfactory for making a decision.
Hypothesis Testing 1: There is no significant difference between the actual beta and the forecasted beta using Blume's Technique.
For this purpose, ANOVA (the "Analysis of Variance" procedure, or F-test) is used to test the significance of the differences between the actual beta and the forecasted beta using Blume's Technique.
Table 2: Output of Hypothesis Testing 1
ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 0.808445 1 0.808445 17.25845 4.83E-05 3.888375
Within Groups 9.368687 200 0.046843
Total 10.17713 201
Table 2 presents the results of the first hypothesis test. The .05 and .01 significance levels are the most common, although other values, such as .02 and .10, are also used; in theory, any value between 0 and 1 may be chosen. In this case the 5% significance level is used. At the 0.05 significance level, the F-critical value is 3.888375, and the decision rule is to reject the null hypothesis if the computed F value exceeds 3.888375. The ANOVA table shows a calculated F value of 17.25845, and the result is statistically significant: the p-value (4.83E-05, effectively 0.000) is well below the 0.05 (5%) level. This means that the beta forecasted with Blume's Technique is significantly different from the actual beta. In fact, the p-value is also below 1%, so there is very little likelihood that the null hypothesis is true.
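The same F-test can be reproduced in a few lines. The sketch below uses scipy's one-way ANOVA with randomly generated betas standing in for the 101 actual and forecasted values; all names and numbers are illustrative only.

    # Sketch of the one-way ANOVA (F-test) used in the hypothesis tests.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    actual_beta = rng.normal(1.0, 0.2, size=101)                          # placeholder actual betas
    blume_beta = 0.3 + 0.7 * actual_beta + rng.normal(0, 0.1, size=101)   # placeholder forecasts

    f_stat, p_value = stats.f_oneway(actual_beta, blume_beta)
    f_crit = stats.f.ppf(0.95, dfn=1, dfd=2 * len(actual_beta) - 2)       # 5% critical value

    print(f"F = {f_stat:.4f}, F crit = {f_crit:.4f}, p = {p_value:.4g}")
    if f_stat > f_crit:   # equivalently, p_value < 0.05
        print("Reject the null hypothesis: the group means differ significantly.")
    else:
        print("Fail to reject the null hypothesis.")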
Hypothesis Testing 2: There is no significant difference between actual beta and forecasted beta using Vasicek's Technique.
For this purpose the ANOVA ("Analysis of Variance") procedure, or F-test, is used to test the significance of the difference between the actual betas and the betas forecasted using Vasicek's Technique.
Table 3: Output of Hypothesis Testing 2
ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 0.808445 1 0.808445 16.40282 7.32E-05 3.888375
Within Groups 9.857395 200 0.049287
Total 10.66584 201
From Table 3, the calculated F value is 16.40282. The 5% significance level is again used; at the 0.05 level, the F-critical value is 3.888375, and the decision rule is to reject the null hypothesis if the computed F value exceeds 3.888375. The result is statistically significant: the p-value (7.32E-05, effectively 0.000) is well below the 0.05 (5%) level. This means that the beta forecasted with Vasicek's Technique is significantly different from the actual beta. In fact, the p-value is also below 1%, so there is very little likelihood that the null hypothesis is true.
Hypothesis Testing 3: There is no significant difference between the outcome of Blume's Technique and Vasicek's Technique.
The ANOVA ("Analysis of Variance") procedure, or F-test, is used to test the significance of the difference between the outcomes of Blume's Technique and Vasicek's Technique.
Table 4: Output of Hypothesis Testing 3
ANOVA
Source of Variation SS df MS F P-value F crit
Between Groups 0 1 0 0 1 3.888375
Within Groups 0.846236 200 0.004231
Total 0.846236 201
From Table 4, the calculated F value is 0. The 5% significance level is again used; at the 0.05 level, the F-critical value is 3.888375, and the decision rule is to reject the null hypothesis if the computed F value exceeds 3.888375. Since the calculated F value is 0 and the p-value is 1, there is no reason to reject the null hypothesis. Therefore, there is no significant difference between the outcomes of Blume's Technique and Vasicek's Technique.
V. Conclusion
Forecasting betas accurately is important because they affect the inputs to portfolio analysis; the variance-covariance matrix is based on the value of beta for each stock. There are basically two reasons for estimating betas: the first is to forecast future betas, and the second is to generate correlation coefficients as inputs to the portfolio problem. Different techniques have been proposed for better estimation of betas.
This empirical study focuses on the forecast ability of different methods of estimating systematic risk, and hypothesis testing is carried out to determine whether there are any significant differences between these methods of estimating future betas, as well as to assess their accuracy. For this purpose, Blume's Technique and Vasicek's Technique were applied using the monthly closing prices of 101 companies listed on the DSE and the DSE all share price index for the period from January 2001 to December 2012. It is concluded that there is no significant difference between Blume's Technique and Vasicek's Technique in estimating future betas in the context of Bangladesh. Both techniques fail to forecast beta accurately, which may be attributed to the inefficiency of the Bangladesh stock market during the time frame covered by this study.
5.1 Limitations of the Research
This paper attempts to forecast beta using Blume's Technique and Vasicek's Technique and thereby to support investment decisions. The current study, however, has some limitations. It does not consider companies that are not listed on the DSE, or companies that are listed and were traded but have since stopped operations. It also uses monthly rather than daily data. The study assesses the forecast ability of these two techniques; future research may concentrate on more accurate forecasted beta estimation and on the development of new adjusted-beta techniques, as well as their synthesis.
References
[1] L. Murray, An Examination of Beta Estimation Using Daily Irish Data, Journal of Business Finance and Accounting, 22, 1995,
893-906.
[2] G. Hawawini, P. Michel and A. Corhay, New Evidence on Beta Stationarity and Forecast for Belgian Common Stocks, Journal of Banking and Finance, 9, 4, December 1985, 553-560.
[3] M. Luoma, T. Martikainen and J. Perttunen, A Pseudo Criterion for Security Betas in the Finnish Stock Market, Applied Economics,
28, 1, January 1996, 65-69.
[4] A. E. Gooding and T. P. O'Malley, Market Phase and the Stationarity of Beta, Journal of Financial and Quantitative Analysis, 12, 5, December 1977, 833-838.
[5] M. Blume, On The Assessment of Risk, The Journal Of Finance, March 1971, 1-10.
[6] M. Blume, Betas and Their Regression Tendencies, Journal of Finance, X, No. 3, June 1975, 785-795.
[7] O. Vasicek, A Note on Using Cross-Sectional Information in Bayesian Estimation of Security Betas, Journal of Finance, VIII, No.
5, Dec. 1973, 1233-1239.
[8] R.C. Klemkosky and J.D. Martin, The Adjustment of Beta Forecasts, Journal of Finance, (September 1975), 1123-1128.
[9] E. J. Elton, M. J. Gruber and T. Urich, Are Betas Best?, Journal of Finance, 13, 5, December 1978, 1375-1384.
[10] D. M. Emanuel, The Market Model in New Zealand, Journal of Business Finance and Accounting, (Winter 1980), 591-601.
[11] E. Dimson and P. Marsh, The Stability of UK Risk Measures and the Problem of Thin Trading, Journal of Finance, June 1983, 753-
783.
[12] A.K. Bera and S. Kannan, An Adjustment Procedure for Predicting Systematic Risk, Journal of Applied Econometrics, 1(4), 1986,
317-332.
[13] M. Lally, An Examination of Blume and Vasicek Betas, The Financial Review, Vol. 33, Issue 3, 1998, 183-197.
[14] G.S. Cloete, P.J. de Jongh and T. de Wet, Combining Vasicek and Robust Estimators for Forecasting Systematic Risk, Investment Analysts Journal, No. 55, 2002, 37-44.
[15] S. Gray, J. Hall and D. Klease, Bias, stability and predictive ability in the measurement of systematic risk, UQ Business School,
2000, 1-40.
[16] P. Sinha and P. Jayaraman, Empirical Analysis of the Forecast Error Impact of Classical and Bayesian Beta Adjustment Techniques, MPRA Paper No. 37662, March 2012.