Department of Business Administration
Master's Program in Accounting & Master's Program in Finance
Master's Thesis in Business Administration III, 30 Credits, Spring 2020
Supervisor: Jörgen Hellström
Implied Volatility and
Historical Volatility
An Empirical Evidence About The
Content of Information And
Forecasting Power
Mohammad Aljaid & Mohammed Diaa Zakaria
ABSTRACT
This study examines whether the implied volatility index can provide more information than historical volatility in forecasting volatility, using GARCH family models. For this purpose, the research forecasts volatility in two main markets: the United States, through its widely used Standard and Poor's 500 index and the corresponding volatility index VIX, and Europe, through the Euro Stoxx 50 and the corresponding volatility index VSTOXX. To evaluate the in-sample content of
information, the conditional variance equations of GARCH (1,1) and EGARCH (1,1) are
supplemented by integrating implied volatility as an explanatory variable. The realized
volatility has been generated from daily squared returns and was employed as a proxy for
true volatility. To examine the out-of-sample forecast performance, one-day-ahead
rolling forecasts have been generated, and Mincer–Zarnowitz regression and
encompassing regressions have been utilized. The predictive power of implied volatility has
been assessed based on Mean Square Error (MSE). Findings suggest that the integration
of implied volatility as an exogenous variable in the conditional variance of GARCH
models enhances model fit and decreases volatility persistence. Furthermore,
the significance of the implied volatility coefficient suggests that implied volatility
includes pertinent information in illuminating the variation of the conditional variance.
Implied volatility is found to be a biased forecast of realized volatility. Empirical findings
of encompassing regression tests imply that the implied volatility index does not surpass
historical volatility in terms of forecasting future realized volatility.
Keywords: Implied Volatility, Mincer–Zarnowitz Regression, GARCH Model, Realized
Volatility, Predictive Power.
Table of Contents
1 Introduction............................................................................................................. 1
1.1 Background........................................................................................................ 1
1.2 GARCH and EGARCH Models ........................................................................ 4
1.3 Problematization ................................................................................................ 4
1.4 Knowledge Gap ................................................................................................. 5
1.5 Research Question ............................................................................................. 6
1.6 The Purpose of The Study ................................................................................. 6
1.7 Delimitations...................................................................................................... 7
1.8 Definitions of Terms.......................................................................................... 7
2 Theoretical Methodology........................................................................................ 9
2.1 Choice of Research Area ................................................................................... 9
2.2 Research Philosophy and Perspectives. ........................................................... 10
2.3 The Paradigm................................................................................................... 10
2.4 Ontological Assumptions................................................................................. 11
2.5 Epistemological Assumption ........................................................................... 11
2.6 Axiological Assumption .................................................................................. 12
2.7 Rhetorical Assumption .................................................................................... 13
2.8 Research approach and methodological assumption ....................................... 13
2.9 Research Method ............................................................................................. 14
2.10 Research design ............................................................................................... 15
2.11 Data Collection Method in Quantitative Research .......................................... 16
3 Theoretical Framework ........................................................................................ 18
3.1 Review of Prior Studies ................................................................................... 18
3.2 Efficient Market Hypothesis............................................................................ 20
3.3 The Random Walk Theory............................................................................... 22
3.4 No riskless arbitrage ........................................................................................ 23
3.5 Black and Scholes model................................................................................. 23
3.6 Choice of The Theory...................................................................................... 25
4 Fundamental Concepts of Volatility.................................................................... 27
4.1 Volatility .......................................................................................................... 27
4.2 Implied Volatility............................................................................................. 28
4.3 Modelling Volatility ........................................................................................ 30
5 Practical Methodology.......................................................................................... 33
5.1 Research question and hypotheses................................................................... 33
5.2 Data sample...................................................................................................... 35
5.3 Input Parameters .............................................................................................. 38
5.4 Statistical Tests ................................................................................................ 39
5.5 Preliminary Data Analysis............................................................................... 44
5.6 Model Specifications ....................................................................................... 47
6 Empirical Testing and Findings........................................................................... 54
6.1 In sample performance..................................................................................... 54
6.2 Out-of-sample Volatility Forecasts Evaluation ............................................... 62
6.3 Mincer–Zarnowitz regression test .................................................................... 63
6.4 Encompassing Regression Test ....................................................................... 66
7 Conclusion.............................................................................................................. 69
7.1 Conclusion ....................................................................................................... 69
7.2 Further research ............................................................................................... 71
7.3 Implications ..................................................................................................... 71
7.4 Contributions ................................................................................................... 71
7.5 Ethical and Societal Considerations................................................................. 72
7.6 Quality Criteria ................................................................................................ 72
8 References .............................................................................................................. 75
List of Figures
Figure 1.1 Markets wake up with a jolt to the implications of COVID-19, (The
economist, 2020). ............................................................................................................. 1
Figure 4.1 Time plot of the VIX index of the European equity index, Jan 2005 to Dec
2019. ............................................................................................................................... 29
Figure 4.2 Time plot of the returns of OMXS30, the stock market index for
Stockholm, Jan 2005 to Dec 2019.................................................................................. 30
Figure 5.1 Time plots of stock indexes for each of the European stock market and S&P
500, Jan 2005 to Dec 2019. ............................................................................................ 35
Figure 5.2 Daily co-movement of S&P 500 index and its respective volatility index, i.e.,
VIX index, Jan 2005 to Dec 2019. ................................................................................. 36
Figure 5.3 Daily co-movement of Euro Stoxx 50 index and its respective volatility
index, i.e., VSTOXX index, Jan 2005 to Dec 2019. ...................................................... 37
Figure 5.4 Time-Series plots of returns for each S&P 500 Index and Euro STOXX 50
Index, Jan 2005 to Dec 2019. ......................................................................................... 46
Figure 6.1 Time-series plot of daily squared returns for the Eurozone Stock market, Jan
2005 to Dec 2019............................................................................................................ 63
Figure 6.2 Time-series plot of daily squared returns for the S&P 500 Stock market
index, Jan 2005 to Dec 2019. ......................................................................................... 63
List of Tables
Table 5.1 Descriptive statistics: January 2005- December 2019.................................... 45
Table 5.2 Different specifications of ARIMA (p,d,q) models Eurozone (Euro Stoxx 50)
Table 5.3 Different specifications of ARIMA(p,d,q) models USA (S&P 500) ............. 49
Table 6.1 Estimation of GARCH, EGARCH and their corresponding VIX.................. 56
Table 6.2 Estimation of GARCH, EGARCH and their corresponding VIX.................. 59
Table 6.3 Descriptive statistics for realized volatility.................................................... 62
Table 6.4 Mincer–Zarnowitz regression test.................................................................... 64
Table 6.5 Encompassing regression test......................................................................... 67
Table 7.1 Encompassing regression test result............................................................... 70
1 Introduction
1.1 Background
For a long time, the United States has been considered the financial hub of the world and the most influential player in the global markets. As we write these words, the world is facing a global health disaster in the form of the COVID-19 outbreak that originated in China. The outbreak has cast its shadow over the global financial markets. The news from Italy, which at the time had the largest cluster of infections outside Asia, led to an 8.92% fall in the S&P 500 index on February 28 (Clifford T, 2020). Amid this global health crisis, markets are nervous, and investors are looking for a safe haven for their investments. As uncertainty increases, investors are trying to assess which assets are most affected by the shock. Declining copper prices are an influential indicator that markets are slowing down. So far, the most affected stocks appear to be those of firms that depend on distant supply chains, such as carmakers and airlines, which had to cancel flights in and out of China as many countries restricted air traffic in an effort to limit the spread of the virus (Economist, 2020). Consequently, the stocks most interconnected with China, such as oil firms, have plunged sharply. Meanwhile, gold prices have reached their highest levels in seven years (Lewis, 2020), the US dollar exchange rate has declined as the Federal Reserve tries to support the stock market (Smialek & Tankersley, 2020), and the yield on ten-year Treasury bonds fell to 1.29% on February 27 (Smith, 2020).
At the time of writing, we do not know the full impact or repercussions of the COVID-19 crisis on the global market; a sense of unease and discomfort prevails among investors, who expect a further and deeper crack in the global economy caused by the virus outbreak. Before this crisis, markets were performing well and SEB had forecast a soft landing and a reduced recession (SEB Bank, 2019, p. 6). Now, the dominant concerns among investors are the opaque structure of financial instruments that depend on low volatility and the bloated credit market.
Figure 1.1 Markets wake up with a jolt to the implications of COVID-19 (The Economist, 2020).
Volatility has also increased significantly, as shown in Figure 1.1, highlighting the significance of fluctuations and time-varying financial uncertainty and their impact on other economic variables (Economist, 2020). Generally speaking, uncertainty is defined as the volatility of random variables that is unexpected by market participants. It is worth noting that uncertainty may have negative impacts on employment rates, investment, consumption, and fund-allocation decisions (Jurado et al., 2015, p. 1177). In the same vein, Kearney (2000, p. 31-32) argues that significant movements in stock returns can have crucial effects on a firm's financial decisions, investment decisions, and other economic cycles. Furthermore, the importance of studying volatility dates back to the 1987 financial crash, since which scholars have paid great attention to volatility in terms of its causes and effects.
It is well acknowledged that stockbrokers, particularly in the stock market, base their investment decisions not only on domestic information from the local market but also on information generated by global markets. This is due to the accelerating globalization of financial markets, driven by the comparatively free flow of commodities and funds alongside the upheaval in information technology (Koutmos & Booth, 1995, p. 747). Rational investment demands quick reaction to, and accurate evaluation of, new information. This implies that market assessment is reasonable and logical, and that every stock will be valued to earn returns consistent with its risk. It also suggests that anticipated events are already reflected in prices, so only shocks or unexpected events shift them, and such events cannot follow any forecasting model; thus, financial shocks are uncorrelated. According to the efficient market theory, any new information arriving on the market should quickly be incorporated into stock prices. However, not all events are predictable. Once new information arrives in the stock market, prices adjust rapidly to reflect it. The significance of new information arriving in the stock market is assessed by its ability to change investors' attitudes toward risk and return (Birru & Figlewski, 2010, p. 1).
It is important to realize that financial crises have disastrous impacts, often represented in the collapse of asset market prices, decreases in real estate prices, falls in stock prices, and a collapse of the banking sector, accompanied by rising unemployment and a recession in production (Reinhart & Rogoff, 2009, p. 466). Under these circumstances, financial traders and organizations seek to develop tools that help them assess what risks they are exposed to and to what extent expected returns are proportionate to those risks. Perhaps the most important of these tools is bond duration, which in practice is used to measure the sensitivity of the market value of a bond to changes in market interest rates. More broadly, portfolio managers can immunize the value of a portfolio by matching its duration with the desired time interval (Reilly & Sidhu, 1980, p. 58). Other tools have since emerged in an effort to manage risk, including but not limited to the CAPM, the Black–Scholes model for option pricing, stress testing, value-at-risk (VaR), RiskMetrics, and CreditMetrics.
Valuation models in practice measure risk by market volatility, and fluctuations in market volatility influence the expected returns on all financial instruments. Accurate measurement of changes in volatility may therefore lead to a coherent illustration of the variation in returns over time (Harvey & Whaley, 1992, p. 43). Observably, the volatilities embodied in option prices are often used to generate information about future market volatility. Broadly speaking, implied volatilities represent the expectations of market traders about market volatility in the short
term. It is worth noting that implied volatility is usually extracted by matching observed market option prices with the theoretical prices calculated according to the Black–Scholes model (Goncalves & Guidolin, 2006, p. 1591). More importantly, implied volatilities are considered so important that they are commonly reported and disclosed in financial news services, and many financial professionals closely follow these reports. For all these reasons, the accurate estimation of implied volatility and its information content have received great attention in the financial literature (Corrado & Miller, 2005, p. 340).
Against this background, the implied volatility extracted from option prices is generally accepted as a device for predicting the future volatility of the underlying financial asset over the lifetime of the option. More importantly, under the efficient market hypothesis, implied volatility should contain all the information incorporated in all other variables, including historical volatility, in explaining the future variation in volatility (Yu et al., 2010, p. 1; Christensen & Prabhala, 1998, p. 126). Consistently, Becker et al. (2006, p. 139) argue that implied volatility should absorb all relevant conditioning information in order to be a satisfactory tool for predicting the future volatility of underlying assets. Comparatively, Canina and Figlewski (1993, p. 659) argue that the main reason behind the general acceptance of implied volatility as the best predictor of future volatility is simply that it represents the market's best expectation of future volatility.
By the same token, Day and Lewis (1992, p. 267) considered the volatility embodied in option prices as a measure used to predict the average volatility of the underlying assets over the remaining life of the option. Furthermore, they argue that the predictive power of implied volatility reflects the extent to which the informational content of option prices is subsumed in their volatilities. Moreover, Fleming (1998, p. 318) argues that all inputs to the option pricing model except the volatility of the underlying asset are objectively determined. It follows that implied volatility subsumes all available information in option prices only if the option pricing model is correct and the market is efficient, i.e., all relevant information is impounded in option prices. Hence, implied volatility should overcome historical volatility in predicting the future volatility of the underlying assets. In contrast, Pong et al. (2004, p. 2541-2542) start their argument by clarifying that the future volatility of a stock price can be forecast either by studying the past behaviour of that price, i.e., historical information, or by utilizing the implied volatility that can be recovered from the value of an option written on the stock.
In 1993, the Chicago Board Options Exchange (CBOE) launched the CBOE Volatility Index, i.e., the VIX index, which was primarily constructed to reflect the market's prediction of volatility over the next 30 days from at-the-money S&P 100 index option prices. The VIX index quickly became the principal barometer of US stock market volatility. It is commonly displayed in leading financial publications such as the Wall Street Journal and Barron's, in addition to financial news on CNBC, Bloomberg TV, and CNN/Money, given that the VIX index is popularly regarded as a "fear gauge" (CBOE Volatility Index, 2019, p. 3). Ten years later, on September 22, 2003, the CBOE updated the structural design of the implied volatility index and introduced an advanced volatility index based on the S&P 500 index, which is widely known
as the VIX (Pati et al., 2018, p. 2553). The driving factor behind using options on the S&P 500 rather than the S&P 100 is mainly that the S&P 500 is the primary stock market index in the US, not only for the options market but also for hedging (Fernandes et al., 2014, p. 1).
It is important to realize that the principal difference between the VIX and a stock price index, such as the DJIA, is that the former measures volatility while the latter measures stock prices (Whaley, 2009, p. 98). Furthermore, the VIX is regularly used by investors and reflects their view of future stock market volatility (Whaley, 2000, p. 12). Generally speaking, high VIX values normally indicate anxiety and turmoil in the stock market, with stock prices declining to unprecedented levels and then being followed by sharp increases. Low VIX values, on the other hand, reflect optimism among market traders, preparing the market for stability and increasing the probability of market progress (Fernandes et al., 2014, p. 2).
1.2 GARCH and EGARCH Models
Volatility is a measure of risk when valuing an asset; hence, it is crucial to measure the volatility connected to asset prices. One of the most distinctive features of asset price volatility is that it is unobservable. Many models have been developed to estimate and measure the volatility of asset prices as well as to provide an accurate estimate of their future volatility (Tsay, 2013, p. 176-177).
Following Day and Lewis (1992, p. 268), one of the most widely used models for forecasting future (expected) market volatility is the GARCH model, which was built on the relationship between the historical volatility of the market and the conditional, or expected, market risk premium. The distinctive feature of a GARCH forecasting model is its statistical technique, which allows the conditional variance of asset returns to change over time according to a generalized autoregressive conditional heteroscedasticity process. This feature makes the GARCH model the most appealing model for the purpose of this study, as the researchers intend to examine the possibility of enhancing this forecasting model. However, the GARCH model does not differentiate between positive and negative shocks to the asset's price volatility. The researchers therefore also consider the EGARCH model, with the intention of enhancing predictive power and comparing both models, as sketched below.
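To make the two specifications concrete, the following sketch fits a GARCH(1,1) and an EGARCH(1,1) to a daily return series using the Python arch package. This is only an illustrative sketch under our own assumptions (the thesis does not prescribe an estimation package, and prices is a hypothetical DataFrame of index closing levels); the implied-volatility-augmented variance equations used later in the thesis are not shown here, as they typically require a customized likelihood.

    import numpy as np
    from arch import arch_model

    # Daily returns in percent, computed from a hypothetical DataFrame of closing prices
    returns = 100 * np.log(prices["close"]).diff().dropna()

    # GARCH(1,1): symmetric response to positive and negative shocks
    garch = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
    garch_fit = garch.fit(disp="off")

    # EGARCH(1,1): the o-term allows asymmetric (leverage) effects
    egarch = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist="normal")
    egarch_fit = egarch.fit(disp="off")

    print(garch_fit.summary())
    print(egarch_fit.summary())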
1.3 Problematization
In practice, traditional finance scholars have employed the past behaviour of asset prices to develop models that help estimate future volatility, which implies that forecasting future volatility is backward-looking by nature. The other method of projecting the future is to utilize the implied volatility embodied in option prices. The driving factor behind this approach is the common view that option prices fundamentally depend on expectations of future volatility. Thus, expected volatility can be estimated by matching observed market option prices with the prices suggested by the Black–Scholes formula (Dumas et al., 1998, p. 2060). As per Liu et al. (2013, p. 861), the reason that implied volatility provides a better estimate of market uncertainty is that it includes historical volatility information as well as investors' anticipation of future market circumstances reflected through
current option prices. It is widely believed that implied volatility represents the best proxy for future volatility, the justification being that if the options market is completely efficient, the implied volatility embodied in option prices should reflect the best forecast of future volatility (Bentes, 2017, p. 243).
On the contrary, the findings documented by Becker et al. (2006, p. 152) point out that there is no significant relationship between implied volatility and future volatility, and they report that the implied volatility index does not contain information beyond that in other variables. Alternatively, Christensen and Prabhala (1998, p. 125) concluded that implied volatility surpasses historical volatility in forecasting future volatility. They added that implied volatility includes all past information, unlike volatility extracted solely from historical data. Their argument was that if the options market is efficient, implied volatility should serve as the best estimate of future volatility. They attributed the difference between their results and previous studies to the use of a longer time series and to the changes in the financial system around the 1987 crash. As more recent evidence, Pati et al. (2018, p. 2566) concluded that implied volatility contains relevant information beyond that suggested by GARCH family models. However, the volatility estimated from the past behaviour of assets also contains relevant information, and implied volatility is considered a biased forecast of future volatility.
Our problematization in this research stems from the common view in previous research that implied volatility produces a better forecast of the volatility of the underlying asset over the lifetime of the option (Yu et al., 2010, p. 1; Liu et al., 2013, p. 861), which means that it is a short-term forecast. On the other hand, volatility forecasts based on historical behaviour can cover longer periods than implied volatility, at the expense of forecast accuracy. In other words, we have two estimation approaches: implied volatility can give accurate estimates of volatility, but only for the short term, while historical volatility gives a longer estimation period but is less accurate. The dilemma at hand is whether we can derive or develop an estimator that combines the advantages of both approaches, a hybrid model that forecasts volatility over a period longer than the lifetime of the option while giving accurate estimates at the same time.
1.4 Knowledge Gap
The knowledge gap stems from the following arguments. First, due to the growing turmoil and shakiness of the stock market, volatility has attracted great attention from researchers, and the core of this debate centers on the predictive power of implied volatility and the informational content of volatility (Bentes, 2017, p. 241). Furthermore, the empirical evidence on this issue, i.e., the importance of implied volatility in forecasting the future as well as its ability to absorb all available information, is still inconclusive and lacks consensus. Looking back at previous studies, Latane and Rendleman (1976), Fleming (1998), Christensen and Prabhala (1998), Jiang and Tian (2005), Fung (2007), Chang and Tabak (2007), and Pati et al. (2018) presented results emphasizing, to varying degrees, that implied volatility is better than historical volatility in forecasting future volatility. In contrast, other studies such as Canina and Figlewski (1993), Pong et al. (2004), Becker et al. (2006), Becker et al. (2007), and Mohammad and Sakti (2018) documented results confirming that historical
volatility is better than implied volatility in forecasting future volatility and contains additional relevant information compared with the volatility extracted from option prices. Clearly stated, the competing empirical explanations for the implied volatility phenomenon are still inconclusive and incomplete. Thus, the failure of the literature to reach a definite conclusion about the predictive power of implied volatility has motivated us to conduct this study, as we see that further research in this area is still required to settle the confusion and contribute to the creation of consensus.
Secondly, one of the basic limitations of implied volatility in forecasting the future is that its use is limited to the remaining life of the option, i.e., short-term prediction. Thus, increasing the prediction horizon of implied volatility would form a considerable contribution to the finance literature. Thirdly, in line with Pati et al. (2018, p. 2553), we argue that the implied volatility produced by indexes has not been much researched in the literature. Consequently, examining the values of implied volatility indexes in comparison with their corresponding stock market indexes will lead to new contributions to finance theory and literature. Considering the differences in depth, liquidity, and regulatory structure between the two stock markets, it will be interesting to conduct a study using a new set of data on those stock market indices. This may support the reported results and increase the robustness of our study.
1.5 Research Question
All of the above led us to develop the following research question:
"Can implied volatility be used to improve the in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?"
To answer this question, we formulated the following sub-questions:
• Does implied volatility add more information when included as an explanatory variable in the conditional variance equation?
• Does implied volatility contain information about future realized volatility?
• Is implied volatility an unbiased forecast of future realized volatility?
• Is implied volatility better than historical volatility at explaining future realized volatility (i.e., does it have superior predictive power)?
• Does the volatility forecast based on implied volatility encompass all relevant information contained in historical volatility?
1.6 The Purpose of The Study
This study seeks to address whether implied volatility includes more information than historical volatility in forecasting stock price movements, and to provide new insights into the volatility of the underlying assets on which options are written. We also aim to investigate the predictive power of implied volatility by studying two major global indices, namely the S&P 500, one of the most followed stock market indices in the USA, and the Euro Stoxx 50, which represents the stock market index of the Eurozone.
Additionally, we investigate the informational content of implied volatility in comparison with historical volatility. For this purpose, we examine whether implied volatility is biased by exploring the extent of its reaction to both bad and good events in the stock markets. Furthermore, the empirical evidence helps provide further explanations of implied volatility movements in markets such as Europe and the US, which account for the overwhelming majority of financial transactions at the global level.
To assess the predictive performance of the forecasting models, we will utilize Mincer–Zarnowitz (MZ) regressions, as sketched below. Equally important, we aim to employ several econometric models in a way that adds robustness to the documented results. Altogether, this study seeks to provide useful implications for portfolio managers, risk managers, and options traders, who may formulate trading strategies that produce earnings by recognizing mispriced options, as well as for academic researchers.
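For reference, the standard Mincer–Zarnowitz specification regresses realized volatility on a volatility forecast and tests for unbiasedness, while the encompassing variant includes both the implied and the historical forecast. The lines below give the generic textbook form in our own notation; the exact specifications used in this thesis appear in Chapters 5 and 6.

    RV_t = \alpha + \beta\,\widehat{\sigma}_t + \varepsilon_t, \qquad H_0:\ \alpha = 0,\ \beta = 1 \quad \text{(unbiasedness)}
    RV_t = \alpha + \beta_{IV}\,IV_t + \beta_{HV}\,HV_t + \varepsilon_t \quad \text{(encompassing)}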
1.7 Delimitations
Some expected limitations should be noted. First, we will use data from markets that have introduced implied volatility indices, so we have limited our search to these markets. Consequently, the findings will only be valid for these markets, or for markets with similar economies. Secondly, and more importantly, there is a lack of finance theories that constitute foundations for this type of study. Thus, the documented results will be practical rather than theoretical in nature.
1.8 Definitions of Terms
Implied volatility: The volatility embodied in option prices, which usually contains information about the market's expectations of the future volatility of the underlying asset over the lifetime of the option. In practice, implied volatility is calculated by matching observed market option prices to the theoretical prices of the Black–Scholes model and then solving for the unknown volatility, given the prices of the underlying assets and the terms of the option contracts (Goncalves & Guidolin, 2006, p. 1591).
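As an illustration of this inversion, the sketch below prices a European call with the Black–Scholes formula and numerically solves for the volatility that reproduces an observed market price. It is a minimal sketch in Python with hypothetical input values; the volatility indices used in this thesis (VIX and VSTOXX) are computed by the exchanges from panels of option prices with a model-free methodology rather than by inverting a single option.

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def bs_call(S, K, T, r, sigma):
        # Black-Scholes price of a European call option
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    def implied_vol(market_price, S, K, T, r):
        # Root-find the sigma whose theoretical price matches the observed market price
        return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - market_price, 1e-6, 5.0)

    # Hypothetical example: at-the-money call, 30 days to expiry, 1% risk-free rate
    print(implied_vol(market_price=2.50, S=100.0, K=100.0, T=30 / 365, r=0.01))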
Implied volatility index (VIX): Like other equity market indexes (S&P 500, DJIA, OMXS30), but with the difference that the VIX measures volatility while the others measure prices. It was launched in 1993 to provide a benchmark of the expected volatility of the underlying assets over the short term and, more importantly, to identify an underlying on which volatility contracts could be written. It is worth noting that the VIX is forward-looking, providing the market's expectation of volatility in the short term (Whaley, 2009, p. 98). High VIX levels mean pessimistic expectations and tend to accompany sharp declines in equity prices, whereas low VIX levels reflect an optimistic view, with equity prices going up (Fernandes et al., 2014, p. 2).
Historical volatility: Conceptually, in the finance literature, historical volatility is used to indicate the deviation from the mean value of all observations over a chosen time interval, measured statistically (Poon & Granger, 2003, p. 480).
ARCH model: The name stands for autoregressive conditional heteroscedasticity. Given the stylized facts shown by return time series, such as volatility clustering, the general assumption of the classical linear regression model (CLRM) that the variance of the error term is constant does not make sense for such time series. Hence, the model assumes that the variance of the error term is non-constant (Brooks, 2014, p. 423; Engle, 1982, p. 987).
GARCH model: An extension of Engle's ARCH work that allows the conditional variance to follow an ARMA-type process (Enders, 2015, p. 129). Hence, the general idea behind the GARCH model is that the conditional variance is a function not only of lagged squared returns but also of its own lagged values (Bollerslev, 1986, p. 309).
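In equation form, the GARCH(1,1) conditional variance and, for comparison, a generic implied-volatility-augmented variant of the kind examined in this thesis (written in our own notation; the exact specification is presented in Chapter 5) are:

    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2
    \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 + \gamma\,IV_{t-1}

where \varepsilon_{t-1} is the lagged return innovation and IV_{t-1} the lagged implied volatility index included as an exogenous variable.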
2 Theoretical Methodology
This chapter is mainly concerned with the philosophical standpoint of this research. It delivers the framework used to construct and conduct this research in terms of research philosophy and perspectives, the paradigm used, the ontological and epistemological stances, and the sampling method. Furthermore, this chapter motivates each position the researchers have adopted, to demonstrate the validity of the adopted knowledge stance and to give a complete picture of the methodological stance.
2.1 Choice of Research Area
The estimation and forecasting of the future volatility of financial variables make an important contribution to finance theory, due to the growing role of volatility in various financial applications, such as the pricing of financial derivatives, portfolio management, risk management, hedging, and asset allocation (Pati et al., 2018, p. 2552). Accordingly, correct estimation of this factor is necessary for sound financial decision making. Broadly, academics and other financial market practitioners, such as investors, speculators, market regulators, and policymakers, traditionally build their expectations about volatility based on the past behaviour of financial variables. To put it differently, they use the historical behaviour of financial instruments to explain expected variation and forecast future changes. Despite the useful contribution of these approaches, they are by nature backward-looking when predicting the future. On the other hand, a second way to explore volatility is to estimate future volatility from option prices, on the understanding that option pricing basically depends on expected forward volatility. Accurate option prices mean accurate expected future volatility. Consequently, accurate estimation of the future movements of financial variables (i.e., their volatility) basically relies on accurate prices of the options written on these financial variables (Dumas et al., 1998, p. 2059).
Furthermore, options traders in practice consider implied volatility the most important input not only in determining the option price but also in determining the future fluctuations of the underlying asset, thus promoting their ability to forecast the future of the stock market (Mohammad & Sakti, 2018, p. 431). All of this motivates us to select implied volatility as our preferred area of research. Further study is needed to explore implied volatility for the sake of assessing its role in improving forecasts of the stock market. More importantly, with an educational background in finance and accounting, we find it both interesting and important to explore and examine implied volatility, realized volatility, the modelling of volatility forecasts, and the information content of implied volatility in the European countries and the USA, contributing to the ongoing discussion about the informational efficiency of implied volatility compared with realized volatility, as well as seeking to develop the concept of implied volatility in a way that could be implemented in the financial industry.
2.2 Research Philosophy and Perspectives.
Conducting research on a specific matter is intricate indeed, since there is never only one way, angle, or perspective for any subject of study; there is not even a unified literature on how to define research (Collis & Hussey, 2013, p. 3). In everyday life, our perception and philosophical view of subjects and events, their purposes, and the view we adopt eventually dictate the way we choose to provide a solution to our problem. Hence, a philosophical and theoretical framework is essential for scientific research: it provides the founding stones and guidelines for undertaking scientific research (Collis & Hussey, 2013, p. 43), it provides the rationale behind the research, and it offers a framework for interpreting and illuminating the results of the research process as well as understanding the context of the social phenomenon (Bryman, 2012, p. 19).
2.3 The Paradigm
A paradigm is a set of criteria that guides how research should be undertaken (Collis & Hussey, 2013, p. 43); according to Saunders et al. (2012, p. 723), it is "A set of basic and taken-for-granted assumptions which underwrite the frame of reference, mode of theorizing and ways of working in which a group operates." Likewise, Bryman (2003, p. 4) has defined a paradigm as the set of concepts and conventions that guides or dictates what should be studied, how research should be carried out, and how to illustrate the research findings. Hence, the philosophical assumptions of the research will forge the research strategy, the data collection methods, and the data processing (Saunders et al., 2012, p. 104). The researcher should be able to argue for the chosen philosophical standpoint and motivate the choice of certain points of view over other philosophical stances (Saunders et al., 2012, p. 104).
Generally, there are two main paradigms: interpretivism and positivism. Picturing a positivist researcher is like looking at a biologist or experimental physicist conducting an experiment in the laboratory. The positivist researcher tries to examine observable and measurable factors under controlled environmental circumstances, describe and measure the effect of a distinctively changed factor upon the whole phenomenon, and record the outcome. This is natural, given that this approach was developed in laboratory settings to serve the understanding of natural phenomena. Positivism perceives the world as governed by cause and effect, in a mechanical relationship between its components (Saunders et al., 2012, p. 105). Knowledge originates from experiments that test an existing theory, with results that can be scientifically verified (Collis & Hussey, 2013, p. 44). The main trait of this paradigm is that it perceives social reality as "objective and singular" and holds that examining this reality will not affect it (Collis & Hussey, 2013, p. 43).
On the other hand, with the rise of the social sciences and the need to deal with variables like culture, language, ethnic orientation, and other human characteristics such as values, customs, traditions, and religion, the perception of social reality is not singular; it is merely subjective, since social reality is the perception formed in our minds about an event (Collis & Hussey, 2013, p. 43). For example, the same word, gesture, or behaviour could be considered friendly or hostile depending on the social circumstances. Scientists realized that social behaviour could not be studied in isolation from society and
needed to be studied in its social context, and that understanding the rationale behind this behaviour is essential (Saunders et al., 2012, p. 107). Positivism could not serve well in this matter, as social and cultural values are relative; it is almost impossible to reach a consensus on a specific perception of a social phenomenon. Positivism was deemed inadequate, and interpretivism emerged to investigate social and behavioural phenomena and integrate their geographical, social, psychological, and cultural aspects. The researcher's opinions thus became a key factor in interpreting the social phenomenon (Collis & Hussey, 2013, p. 45).
In this study, our research question is, "Can implied volatility be used to improve the in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?" We choose to undertake our research under the positivist approach, since we are seeking an objective view of social reality, and this approach enables us to test previous theories, gain further evidence, provide more extensive analysis, and scientifically examine the conflicting evidence of previous research as well as our developed hypotheses. We think that this approach is adequate for our research. Meanwhile, the interpretivist approach is much less appropriate for such quantitative research and would distort the objectivity of this research.
2.4 Ontological Assumptions
Ontology is mainly concerned with the nature of social reality relative to social entities (Bryman, 2012, p. 32). This issue is important for presenting the researcher's position relative to social reality and social actors. An objective ontological stance regards social reality as external to social entities: it cannot be influenced by actors, it represents a restraint upon them, and it exists regardless of whether actors acknowledge it or not (Collis & Hussey, 2013, p. 49).
On the other side, social entities or actors are the essential building blocks of interpretivism (Saunders et al., 2012, p. 106). Social reality can be influenced by social entities; hence, social reality is forged and described according to the respective perceptions the social entities hold of each other (Bryman, 2012, p. 32). This results in a more subjective view of social reality.
In this research, objectivity is more appropriate, since we intend to conduct empirical tests to scientifically validate previous evidence and test our developed hypotheses. We need to acquire a further comprehension of implied volatility and how it changes in relation to its underlying assets. We believe that, in order to clarify the nature of the relation and behaviour between factors and give our verdict on the research question, objectivity is necessary. Meanwhile, interpretivism would induce distortions in social reality; hence, it could introduce flaws and limitations into our research process.
2.5 Epistemological Assumption
Epistemology refers not only to the know-how of acquiring knowledge but also forms a yardstick that we can use to substantiate this knowledge and create a science (Collis &
Hussey, 2013, p. 47). Similarly, according to the Lexico Dictionary (www.lexico.com) by Oxford, "Epistemology is the theory of knowledge, especially with regard to its methods, validity, and scope as it distinguishes justified belief from opinion." In other words, we are talking about the accredited tools that we can use to acquire information.
For positivist researchers, information has to be observable, measurable, and verifiable via appropriate empirical tools in order to be acknowledged as knowledge. Also, the independence of the researcher plays a vital role in maintaining the objectivity of the research (Collis & Hussey, 2013, p. 47).
Contrariwise, since the interpretive researcher regards social reality as subjective and interferes in formulating it, the involvement and opinions of the researcher are important for providing his or her understanding of social reality. Hence, the interpretive researcher is not independent and also acquires evidence from participants in social reality (Collis & Hussey, 2013, p. 47).
In our research, we intend to use a positivist epistemological approach. We believe that this approach is appropriate for our empirical testing. Generalizability is a big advantage of the positivist approach, and we are ambitious that our research findings will be useful and generalizable. Needless to say, our independence is important in this research, so using interpretivism as a general approach would be inadequate.
2.6 Axiological Assumption
Axiology refers to the role of the researcher's values and ethics in the research process (Saunders et al., 2007, p. 134). Our values and ethics contribute greatly not only to our view of the research subjects but also to our vision of right and wrong and, eventually, to the moral compass that guides our behaviour and actions. The researcher must be aware of the impact of his ethics and values on his research and decide whether they have a positive or negative effect on it (Saunders et al., 2007, p. 134).
Because the positivist researcher considers social reality to be independent and beyond reach, something he cannot influence (Collis & Hussey, 2013, p. 49), positivist researchers often have more of a proclivity towards neutralising the effect of their values on the research (Collis & Hussey, 2013, p. 48).
In contrast, interpretive researchers acknowledge that they have an axiological stance and that research cannot be value-free (Bryman, 2016, p. 39). Furthermore, they see the involvement of values and morals as essential to providing a meaningful explanation of the research findings (Collis & Hussey, 2013, p. 49). These values and morals can belong to the researcher as well as to the research subjects (Bryman, 2012, p. 40).
In this research, the researchers are concerned with studying the relationship between the change in option prices and the change in the underlying assets. We see ourselves as independent from the subject of study and keen to present an objective view of our research findings; we consider our research quasi value-free. Hence, we take a more positivist axiological stance throughout our research.
2.7 Rhetorical Assumption
Following Collis and Hussey (2013, p. 48), language serves as a medium of exchange for communicating knowledge. Hence, it is important to consider the language used in an academic paper as a complementary part of the whole research paradigm. As we mentioned, positivists see themselves as independent from social reality and the research subject and seek more objectivity and less bias. This makes it more logical to use the passive voice when presenting ideas and explaining findings.
On the contrary, since interpretivists can involve their ideas, beliefs, and moral stances, they can use the active voice to express their ideas and also incorporate their values when explaining the research findings (Collis & Hussey, 2013, p. 48).
Considering the above, we believe that the positivist approach is the more suitable one for our research. It will help us produce more measurable and generalizable results.
2.8 Research approach and methodological assumption
Every researcher needs to use theory in his research. Hence, the question of the research approach is a question about the purpose of the research relative to the theory used. This is an important aspect, as the researcher needs to be aware of whether he will build a theory or test a theory, since this will be reflected in the research design (Saunders et al., 2012, p. 134). Also, the research approach helps the researcher make an enlightened decision in terms of research strategy, a distinct comprehension of the study findings, and a sound understanding of the sort of conclusions the research will yield (Sekaran & Bougie, 2016, p. 30). This leaves the researcher to choose between the inductive, deductive, and abductive approaches to conducting his research.
Inductive research occurs when a researcher wonders about the reason behind an observation (Sekaran & Bougie, 2016, p. 26). Usually, this approach is not based on previously developed theories (Saunders et al., 2016, p. 52), and it is usually aimed at generating a theory or evolving an existing theory to obtain a richer theoretical perspective for answering the research questions (Sekaran & Bougie, 2016, p. 26). In the inductive approach, or qualitative research, the researcher studies the participants' meanings and the liaison between them (Saunders et al., 2016, p. 168). Data collection under the inductive approach does not follow a single standard and can change during the research, as it is an interactive and natural process (Saunders et al., 2016, p. 168). Since the inductive approach is exploratory, it may use interviews, either structured or semi-structured, to gather information (Collis & Hussey, 2013, p. 4).
On the other hand, under the deductive approach, "Scientific research pursues a step-by-step, logical, organized, and rigorous method (a scientific method) to find a solution to a problem" (Sekaran & Bougie, 2016, p. 23). Here the researcher embraces an explicit theoretical stance and begins the process of collecting and analyzing data with the intention of testing this theory (Saunders et al., 2016, p. 52). Furthermore, a deductive strategy is more often coherent with a quantitative research approach (Bryman, 2016, p. 32). The deductive approach usually tries to illustrate the cause-and-effect relationships between concepts and research objects (Saunders et al., 2016, p. 146). Unlike the inductive
approach, testing theories using a deductive approach requires gathering a large amount of data, and processing this data requires an advanced, structured methodology (Saunders et al., 2016, p. 150).
Between testing theory with deduction and generating theory with induction lies the abductive research strategy, "a form of reasoning with strong ties with induction that grounds social scientific accounts of social worlds in the perspectives and meanings of participants in those social worlds" (Bryman, 2016, p. 709). Abduction acts as a mediator between the previous two research approaches. Researchers gather data to excavate a social phenomenon, describe main themes, and illuminate patterns, in order to build a new theory or reshape an existing one, and then test this newly reshaped theory (Saunders et al., 2016, p. 145).
In our research, we will use an abductive approach, as it enables us to customize our data collection method, research design, and data processing. This allows us to craft a research strategy tailored to fit our research question regarding the change in implied volatility and its relation to the change in the underlying asset price. We believe that the abductive approach gives us an agile research strategy for undertaking our research. The ability to form a compound research strategy combining quantitative and qualitative techniques when needed empowers us to shed more light on the conflicting evidence at hand.
2.9 Research Method
There are two main categories to choose from when conducting research: quantitative and qualitative research designs (Saunders et al., 2016, p. 164). The difference between the quantitative and qualitative research methods emerges from the kind of data itself (Neuman, 2014, p. 167): quantitative research is usually distinguished by collecting and processing numerical data, whereas qualitative research uses non-numeric data (Johnson & Christensen, 2014, p. 82). However, practically speaking, business research often needs to combine both research methods (Saunders et al., 2016, p. 165).
Positivism is mostly associated with the quantitative approach (Saunders et al., 2016, p. 167). Furthermore, the quantitative approach is mostly connected to the deductive approach, which is usually used when the purpose of the research is to test theory. Meanwhile, the qualitative approach is often associated with interpretivism (Saunders et al., 2009, p. 52). The qualitative approach is often used to gain a deeper understanding of the theory itself or to investigate a certain phenomenon (Johnson & Christensen, 2014, p. 82).
In our research, the quantitative approach will be the main approach. It will enable us to use statistical and mathematical models to test our formulated hypothesis on the applicability of extending the predictive power of implied volatility over periods that exceed the lifetime of the options themselves.
2.10 Research design
Research design is often described as the road map that the researcher follows to answer the research question (Saunders et al., 2016, p. 145); similarly, it "is the plan or strategy you will use to investigate your research" (Johnson & Christensen, 2014, p. 182). As per Collis and Hussey (2013, p. 97), the importance of choosing a research design lies in illuminating the way for the researcher to draw up a comprehensive plan that is used as guidance in answering the research question in the best possible way. And since research approaches are quite contrasting, this could result in "miscommunication and misunderstandings" (Neuman, 2014, p. 167) if the research design were not accurately chosen. Furthermore, the kind of research relies on its intended goal, and can be separated into four types: descriptive, explanatory, exploratory, and predictive (Collis & Hussey, 2013, p. 3).
The main goal of descriptive research is to depict a detailed narration of the status of the components of a social phenomenon (Johnson & Christensen, 2014, p. 547). Descriptive research provides a description of the characteristics of a situation, social phenomenon, or relationship (Neuman, 2014, p. 167). Its main target is not explaining the causal relations between variables but portraying the variables and illustrating the relationships between them (Johnson & Christensen, 2014, p. 547). Usually, the outcome of this kind of study is an overall depiction of the social phenomenon (Neuman, 2014, p. 39).
After descriptive research comes the explanatory, or analytical, research design. This kind of research is thought to serve as an extension of descriptive research (Collis & Hussey, 2013, p. 5). Johnson and Christensen (2014, p. 547) define explanatory research as "Testing hypotheses and theories that explain how and why a phenomenon operates as it does." Hence, as the definition suggests, researchers under this design are keen not only to describe the social phenomenon as it is but also to understand and analyse the relationships between its variables, studying the mechanism by which the social phenomenon happens and testing their formulated hypotheses (Neuman, 2014, p. 40).
Thirdly comes exploratory research. As the name suggests, this type of research sheds light upon a social phenomenon or a particular case about which we have little knowledge and begins investigating it (Johnson & Christensen, 2014, p 582). In other words, it is the kind of research that investigates less understood subjects and aims to produce an initial idea about them (Neuman, 2014, p 38). The exploratory approach thus provides a useful way to wonder about the phenomena around us and acquire new knowledge (Saunders et al., 2016, p 174). Furthermore, it provides the researcher with more malleability and flexibility in the research process (Saunders et al., 2016, p 175).
The last type of research design is predictive research, which refers to "a research focused on predicting the future status of one or more dependent variables based on one or more independent variables" (Johnson & Christensen, 2014, p 870). The main aim of this type of research is to generate a generalization based on a previously formulated hypothesis that gives the ability to predict the future outcome or behaviour of a certain phenomenon (Collis & Hussey, 2013, p 5).
In our study, we consider our research to be exploratory. Although there is a substantial literature about implied volatility, and it has been shown to provide superior results compared to historical volatility, implied volatility measures are only valid during the option's life. Consequently, little is known about its long-term predictive power. Hence, we use quantitative data to investigate and test our formulated hypothesis. Our aim is to provide empirical evidence on the predictive power of implied volatility in the long term.
2.11 Data Collection Method in Quantitative Research
Under the positivist paradigm with a quantitative approach, the first concern is collecting a representative sample that captures the features of interest in the desired population (Neuman, 2014, p 38). Since it is not possible to study every individual piece of information available in the population, we have to take a sample.
Sample method
When sampling for quantitative research, the researcher needs to be aware of the kind of sample chosen for the research. There is more than one type of sampling: representative sampling and biased sampling (Johnson & Christensen, 2014, p 344). In other words, the researcher has two methods to use in order to collect data: probability, or representative, sampling and non-probability sampling (Saunders et al., 2016, p 275).
A representative sample is a sample that captures all the features and characteristics of the original population but is smaller in size (Johnson & Christensen, 2014, p 344). Similarly, it is defined as a sample in which "the chance, or probability, of each case being selected from the population is known and is usually equal for all cases" (Saunders et al., 2009, p. 213). Hence, a representative sample achieves high precision and is efficient to use in quantitative research; representative sampling is also considered highly cost-effective relative to the efficiency level it provides (Neuman, 2014, p 247-248).
Alternatively, a biased sample is a sample chosen by the researcher according to a common selection criterion that makes it consistently different from the original population (Johnson & Christensen, 2014, p 344). Furthermore, nonprobability sampling techniques are considered less demanding than probability sampling in terms of the mathematical processing of data (Neuman, 2014, p 248). There are three sub-techniques of nonprobability sampling: convenience sampling, snowball sampling, and quota sampling. Convenience sampling is "A non-random sample in which the researcher selects anyone he or she happens to come across." (Neuman, 2014, p 248). Hence, the selection criteria are accessibility, readiness, and the effortless availability of individuals for the sample (Neuman, 2014, p 248). However, this method is not suitable for all research, as it may generate a very distorted picture of the population (Neuman, 2014, p 248).
In contrast, quota sampling is "A non-random sample in which the researcher first identifies general categories into which cases or people will be placed and then selects cases to reach a predetermined number in each category" (Neuman, 2014, p 249). Thus, it is a designed selection that aims to form a more representative picture of the population by targeting a specific criterion in each group that would eventually resemble the population studied (Johnson & Christensen, 2014, p 363). Although it might seem an accurate method, representing all the possible categories in the population is sometimes hard to achieve (Neuman, 2014, p 249).
Furthermore, snowball sampling is a method whereby the researcher asks the participants to refer individuals from their social network who meet a certain criterion the researchers are interested in and who are willing to participate in the study (Johnson & Christensen, 2014, p 365). This method depends on the exponential effect of the human social network: it acquires a few participants at the beginning and then grows bigger over time (Neuman, 2014, p 275).
Our research tests the formulated research question, "Can implied volatility be used to improve in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?". Since quantitative research can employ a single data collection technique, we choose to base our research on a 15-year sample of data gathered from Eikon. We draw the samples from the most widely followed indexes that can be considered representative of the studied markets. Hence, we chose two exchange market indexes: the Standard and Poor's 500 for the USA and the Euro Stoxx 50 for Europe. We think that the chosen time interval of 15 years provides enough data to conduct our research. The researchers also think that this time span, which includes the rise and fall of the international financial crisis and its repercussions, enables us to validate our findings.
Literature and source criticism
To conduct this research, the researchers relied only on appropriate literature and theories that have a sound basis. We have chosen the Umeå University library as the main source of literature, and we have also used Google Scholar. To ensure the accuracy and reliability of the data collected, we have used Eikon to extract our data.
The researchers have used articles from well-respected journals to build upon or refer to in this research. We have chosen the most relevant and useful resources available on implied volatility, the cost of volatility, volatility as a fear gauge, the predictive power of volatility, and the mathematical models that we have used. We chose GARCH and Black-Scholes model variants and the Mincer-Zarnowitz regression for calculating and processing data, as these were the most appropriate models we could find and they provide our research with a sound basis to build on. The researchers were keen to provide adequate keywords, definitions, and terminology that promote cohesion and homogeneity in the research. Citations and references were applied according to the Umeå business school thesis manual.
3 Theoretical Framework
The main goal of this part is to present a review of previous literature on the subject of the research, as well as theories that are in line with our study, followed by the choice of theory. The choice of theory involves the motivations and arguments behind our choice, as well as the driving factors that support our viewpoint about the theoretical foundation of the study. In other words, the theory is selected in such a way that it is tailored to our expected findings.
3.1 Review of Prior Studies
In the financial literature, there is a common view amongst researchers that there is a negative relationship between the implied volatility index and stock returns. Researchers have also found that stock market volatility responds asymmetrically to the direction of the change in stock returns: volatility is more sensitive to adverse return shocks than to positive return shocks (Shaikh & Padhi, 2016, p. 28). However, the relationship between implied volatility and realized volatility in terms of informational content and forecasting of future volatility is not yet as clear. There is conflicting empirical evidence on the role of implied volatility in forecasting future volatility and on its informational content. In the following lines, we present a summary of selected articles that studied these concepts.
Latane and Rendleman (1976) suggested that the volatility embodied in option prices can be calculated based on the assumption that all investors and options traders behave according to the Black-Scholes model. In other words, they assumed that the implied volatility is equal to the actual volatility of the underlying stock. They motivated their assumption by illustrating that the ability to estimate implied volatility correctly relies on a reasonable estimation of the impact of dividend payments, transaction costs, time differences, taxes, etcetera. They suggested a methodology based on utilizing the weighted average of implied standard deviations as a tool to measure the future volatility of stock returns. The reported findings confirm that the standard deviations implied by market prices differ from those suggested by the Black and Scholes model. Hence, the prices suggested by the model do not capture the real factors that determine the decisions of options traders. More importantly, they suggested that implied standard deviations are superior in forecasting volatility to measures derived from historical volatility.
Similarly, Harvey and Whaley (1992) investigated the dynamic behaviour of implied volatility to evaluate market options. They built their study on the implied volatility of S&P 100 index options. The documented findings indicate that the options market is efficient, and they reported that changes in future volatility can be forecasted using implied volatility. Alternatively, Day and Lewis (1992) examined the informational content of implied volatility derived from call options on the S&P 500 index using GARCH and EGARCH models. They added implied volatility as an explanatory variable to the conditional variance equation to capture the informational content of implied volatility. However, the documented results showed conflicting evidence: neither implied volatility nor the conditional volatility of GARCH and EGARCH could capture the realized volatility completely.
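To make the mechanics of this approach concrete, the sketch below is our own simplified illustration (not the estimation code of Day and Lewis, nor the exact specification used later in this thesis) of a Gaussian GARCH(1,1) conditional variance augmented with lagged squared implied volatility, fitted by quasi-maximum likelihood with numpy and scipy. The function names, the starting values, and the assumption that both series are passed as numpy arrays expressed in daily return units are ours.

```python
import numpy as np
from scipy.optimize import minimize

def garch_x_neg_loglik(params, returns, iv):
    """Negative Gaussian log-likelihood of a GARCH(1,1) whose conditional
    variance is augmented with lagged squared implied volatility:
        sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1} + gamma*iv_{t-1}^2
    Both `returns` and `iv` are assumed to be numpy arrays in daily units.
    """
    omega, alpha, beta, gamma = params
    eps = returns - returns.mean()          # demeaned returns as innovations
    n = len(eps)
    sigma2 = np.empty(n)
    sigma2[0] = eps.var()                   # initialize with the sample variance
    for t in range(1, n):
        sigma2[t] = (omega + alpha * eps[t - 1] ** 2
                     + beta * sigma2[t - 1] + gamma * iv[t - 1] ** 2)
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + eps ** 2 / sigma2)

def fit_garch_x(returns, iv):
    """Estimate (omega, alpha, beta, gamma) by quasi-maximum likelihood."""
    start = np.array([returns.var() * 0.05, 0.05, 0.85, 0.01])
    bounds = [(1e-10, None), (0.0, 1.0), (0.0, 1.0), (0.0, None)]
    result = minimize(garch_x_neg_loglik, start, args=(returns, iv),
                      bounds=bounds, method="L-BFGS-B")
    return result.x
```

In this setup, a significantly positive estimate of the coefficient on the implied volatility term is what would typically be read as evidence that implied volatility carries incremental information about the conditional variance.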
Moreover, Canina and Figlewski (1993) countered the previous research. They analysed over 17,000 OEX call option prices on the S&P 100 and provided a surprising result that sharply conflicts with the traditional perception. They reported that implied volatility has very little informational content about future realized volatility. They also rejected the explanation that options traders' decisions are irrational; although they could not rule out irrationality completely, they considered traders to be efficient. Consistently, Lamoureux and Lastrapes (1993) explored the behaviour of implied volatility compared to the volatility of the underlying assets. They rejected the joint hypothesis that options markets are efficient and that option pricing models work correctly. On the other hand, Jorion (1995) presented supportive findings, as he found that implied volatility had more informational content than historical volatility. He investigated the predictive power and the informational content of implied volatility derived from options on foreign currency from the Chicago Mercantile Exchange. However, implied volatility was found to be biased when it comes to forecasting future volatility.
In the same fashion, Xu and Taylor (1995) compared the predictive power of implied volatility and historical volatility predictions using a sample of four exchange rates spanning 1985 to 1991. They showed that volatility forecasts extracted from implied volatility contain more information than those based on historical volatility. However, the documented results also showed a failure to reject the hypothesis that past returns have no information beyond that provided by option prices. Furthermore, Christensen and Prabhala (1998) supported the findings of Xu and Taylor (1995) by confirming that implied volatility subsumes all available information in options markets. They reported that implied volatility dominates historical volatility as a forecasting tool for future market volatility. They strengthened their results by employing a larger sample of data, justifying the deviations of previous studies by the regime shift around 1987.
Fleming (1998) investigated the forecasting performance of S&P 100 implied volatility. He found that, despite the bias of implied volatility forecasts relative to realized volatility, implied volatility forecasts subsume valuable information about realized volatility. He suggested that a linear model correcting the bias of implied volatility would be a useful tool in forecasting market expectations of future volatility. On the contrary, Pong et al. (2004) provided evidence quite different from previous studies, suggesting that historical volatility is more accurate than implied volatility in forecasting future market volatility. In addition, the informational content of historical volatility in forecasting future volatility surpasses that derived from implied volatility.
Becker et al. (2006) challenged Fleming's (1998) findings. They examined the ability of implied volatility to incorporate all available information based on the VIX index. Their hypothesis was that implied volatility extracted from option prices should not only reflect available published information but also represent the best market expectations about the future volatility of underlying assets. They concluded that implied volatility has reduced efficiency with respect to all factors relevant to volatility forecasting. They showed that, although there is a positive correlation between the implied volatility index and the future volatility of the underlying asset, implied volatility does not improve the forecasting performance of future volatility forecasts. Becker et al. (2007) continued this research to obtain a better understanding of the informational content of implied volatility. They investigated the VIX index and its relevance to forecasting models, and whether the VIX index contains more information than historical volatility. Their research showed that implied volatility does not include any further information beyond historical volatility.
Yu et al. (2010) investigated the predictive power of the volatility embodied in options traded both over the counter and on exchanges. The documented findings indicated that the predictive power of implied volatility is superior to that of historical volatility regardless of whether the options are traded OTC or on exchanges. Conversely, Bentes (2017) provided contradicting evidence in his study of the relationship between the implied volatility index and realized volatility. He used monthly data from the BRIC countries and sought to explore the informational content of implied volatility and its role in explaining realized volatility. More importantly, he utilized statistical methods such as the autoregressive distributed lag (ADL) and error correction (EC) models and compared the extracted results with those from OLS regression. The results showed that implied volatility is not efficient for any of the BRIC countries, albeit it was unbiased for India.
Recently, Mohammad and Sakti (2018) investigated the informational content of the volatility embodied in call options in the Malaysian stock market. They used daily data for 100 trading days between 2013 and 2014. The documented findings indicate that implied volatility does not contain relevant information about the future market volatility of the underlying assets. Furthermore, forecasts based on implied volatility are less accurate than predictions based on historical volatility. Alternatively, Pati et al. (2018) presented conflicting evidence; they studied the informational content of the implied volatility index relative to the corresponding stock market indexes for three markets, i.e., India, Australia, and Hong Kong. The documented findings showed that implied volatility is biased in forecasting future stock market volatility; however, it contains relevant information that can explain future realized volatility.
3.2 Efficient Market Hypothesis
The efficient market hypothesis emerged in the mid-60s through the efforts of the Nobel prize winner Paul Samuelson, who promoted Bachelier's work among other economists with his empirical studies (MacKinlay & Lo, 1999, p. 3). The efficient market hypothesis (EMH) paved the way for economists and mathematicians such as Fama (1965, p. 55) to formulate the random walk theory based on the same fundamental assumptions as the efficient market hypothesis.
This theory states that in an efficient market, the realized stock price fully reflects all available information in the market, and this price is the fair price for that stock (Fama, 1991, p. 1575). Since stocks constantly trade at their fair value according to the EMH, there is no under-pricing or over-pricing of stocks, which means that achieving profits by acquiring undervalued stocks and selling them at a higher price is impossible for investors. In other words, no investor can consistently achieve abnormal returns, on the basis that information is available to all traders and prices adjust quickly to reflect all new information (Houthakker & Williamson, 1996, p. 25-26). Thus, according to the efficient market hypothesis, no investor can consistently earn more than the market unless additional risk has been taken on in the investment.
To sum up the theory:
• The efficient market hypothesis (EMH) states that share prices fully reflect all information available in the market.
• The EMH assumes that stock trades occur at their fair market value on exchanges.
• The EMH postulates that investors can use passive investment as a low-cost method of investment.
• Opponents of the EMH believe that it is feasible to outperform the market and that stocks can deviate from their fair market values.
To understand the EMH, it is important to differentiate between two components of the theory: the informational efficiency and the operational efficiency of the stock market, where operational efficiency refers to the ability of investors or traders to perform their trades at the lowest possible cost. Consistently, Fama (1991, p. 1575) states that a strong form of this efficiency requires that the cost of making stock prices reflect all available information be zero. Malkiel and Fama (1970) categorized markets into three classes based on their operational and informational characteristics, with empirical tests of efficiency grouped into "weak-form", "semi-strong-form", and "strong-form" tests. The more information is available to all parties in the market, the more efficient the market; likewise, the lower the transaction costs, the more efficient the market will be. In the weak form of the efficient market, stock prices reflect only historical information, while the semi-strong form states that stock prices reflect current and past publicly known information. Concerning the strong form of efficiency, stock prices reflect not only the public and past information relevant to the firm but also information that is available only to the firm's insiders (Fama, 1970, p. 383; Thaker & Jitendra, 2008, p. 62). Hence, deviations from the strong form of market efficiency arise from transaction costs and from barriers to quick access to information (Fama, 1991, p. 1575). In the semi-strong form of an efficient market, some implications are inconsistent with the implied assumptions of the EMH: traders may receive information earlier than other market participants, which gives them a temporary monopoly over that information (Schwartz, 1970, p. 422). Nevertheless, current stock prices in efficient markets will quickly absorb and respond to any new information (Thaker & Jitendra, 2008, p. 62). The previous studies of stock market efficiency conducted by Fama found little evidence against the EMH (Erdös & Ormos, 2010, p. 1062). However, Houthakker and Williamson (1996, p. 36) argue that reaching the strong form of an efficient market is too difficult.
3.3 The Random Walk Theory
The theory states that changes in stock prices have the same distribution and are independent of each other. Hence, it implies that previous changes, i.e., historical changes or trends, are irrelevant and cannot be utilized to forecast stock prices. The random walk theory holds that stock prices are totally random and unpredictable, which consequently makes forecasting methods ineffective in the long run.
To understand the logic behind the above conclusion of the random walk theory, we must understand its basic assumptions. The random walk theory supposes that the market is efficient and that the price of stocks reflects all available information. Hence, no investor can achieve abnormal returns or surpass the market's performance without taking on additional risk. The second assumption is that the movement of stock prices is a completely random variable, which means that price changes are stochastically independent (Cheng & Deets, 1971, p. 11), and investors initiate their buying and selling decisions after a recognized trend has already developed. To illustrate, suppose that there are non-random variations in stock prices, meaning that there is either a steady upward trend or a downward trend. Then speculators and other financial traders could predict and interpret the stock price movements before they occur, and stock market investors would make their investment decisions consistent with these expectations: traders would buy before stock prices increase and sell before stock prices decrease. All of this together would make the behaviour of investors and speculators, in response to predictable stock price changes, stylized in nature. Given the large number of speculators and traders in the stock market, who are assumed to be well informed about the predictable changes in stock prices, it follows that no one can predict the behaviour of stock prices (Miller, 1979, p. 55).
According to Pant and Bishoni (2001, p. 3), the core of the random walk theory is that a variable tracked over a selected specific time is not expected to follow an identified pattern. Furthermore, the theory claims that the quality of the information collected for use in fundamental analysis is often poor, which makes fundamental analysis of stock prices unreliable.
To sum up the theory:
• The theory proposes that fluctuations in stock prices are independent and have the same distribution.
• The theory implies that the historical changes or trend of a stock price or market cannot be used to forecast its future price.
• The theory holds that investors must take on additional risk to surpass market performance.
• The theory considers technical analysis unreliable, as investors make decisions about buying or selling a stock after the movement has already formed.
• The theory considers fundamental analysis unreliable because of the poor quality of the information gathered, not to mention that it could be miscalculated as well.
• The theory claims that investment advisors contribute little or no value.
According to the above, we can conclude that in an efficient market, where no one can make profits based on past information, stock prices should follow a random walk (Pant and Bishoni, 2001, p. 3; Miller, 1979, p. 55).
On the other hand, NM Jula and N Jula (2017, p. 878) reported that many studies have found that stock market prices are predictable by traders to some extent. This suggests that traders can develop trading strategies based on some predictable trends in stock prices. More importantly, Miller (1979, p. 56) claims that the random walk and efficient market theories may hold only for options markets and equity markets, motivating his viewpoint by the ease and speed with which options contracts can be arranged and transferred from one party to another. It follows that, if informed traders are wary of a steep rise in stock prices, they would sell short in large amounts to uninformed traders. Hence, the efficient market holds in the presence of informed traders.
3.4 No riskless arbitrage
The concept of arbitrage has played a great role in providing coherent explanations for numerous financial pricing models. Financial academics define arbitrage as the opportunity or the ability to employ a set of transactions, without incurring any costs, to achieve risk-free profits (Tuckman & Jean-Luc, 1992, p. 1283). With attention to that, one of the key assumptions of the Black-Scholes formula is the no-riskless-arbitrage argument, which means that when we are able to remove all risks from a portfolio, the earnings achieved by this portfolio will equal the risk-free rate of return (Hull, 2009, p. 237). This assumption allows us to determine the price of the option from the replicating portfolio (Hull, 2009, p. 237), keeping in mind that the mathematical apparatus utilized in option pricing theory, i.e., the Black-Scholes model, is quite advanced.
Generally speaking, the importance of the no-arbitrage argument derives from the arguments provided by Black and Scholes (1973, p. 637), who argue that if the market provides accurate option prices, then the opportunity to achieve profits by setting up a portfolio consisting of a long position in a stock and a short position in options on that stock will be zero. Grounded on this argument, the option valuation formula is derived. In essence, the underlying logic behind setting up the riskless portfolio from only the stock and an option written on the stock arises from the fact that both are affected by the same underlying risk, i.e., changes in the stock price. Over a short interval of the option's life, it follows that the two are perfectly correlated. Thus, the gain obtained from a positive movement in the stock price will be offset by the loss on the option value. Altogether, the overall value of the portfolio will be the same at the end of the period. This means that uncertainty converts to certainty, and the earnings achieved by the created portfolio should equal the risk-free rate (Hull, 2009, p. 285).
3.5 Black and Scholes model
The pricing process of options in the stock markets represents one of the most essential aspects of modern financial theory. At the beginning of 1973, Fischer Black and Myron Scholes developed an advanced formula for the pricing of European call options, which in turn represented a real revolution in financial engineering (Dar & Anuradha, 2017, p. 231). The BS model has served as the cornerstone of options pricing in the stock markets (Lee et al., 2005, p. 330). For the sake of avoiding domination by the mathematical aspects, we restrict our presentation to the call option formula, the assumptions of the model, and the general logic behind it.
According to Viot (2005, p. 53), the underlying assumptions of the BS formula are the following:
• A fully competitive market, i.e., efficient, complete, and with zero transaction costs.
• The impact of taxation on the transaction is neglected.
• All market participants use the risk-free rate as the borrowing and lending rate.
• The underlying stock does not pay dividends.
• The formula applies to European options, given that the option can be exercised only at the expiration date (Black & Scholes, 1973, p. 640).
Following Bodie et al. (2018), the BS formula is widely used to price European call options based on the probability that the option holder will exercise the option.
The Black-Scholes formula for pricing the call option can be written as:

$$C_t = S\,N(d_1) - X e^{-rt} N(d_2) \quad (3.1)$$

$$d_1 = \frac{\ln\!\left(\frac{S}{X}\right) + \left(r + \frac{\sigma^2}{2}\right)t}{\sigma\sqrt{t}} \quad (3.2)$$

$$d_2 = d_1 - \sigma\sqrt{t} \quad (3.3)$$
where C_t denotes the price of the European call option, i.e., the call premium, S is the current stock price, X is the exercise price (the strike price when the option expires), r is the risk-free interest rate, N(d) is the cumulative standard normal distribution, i.e., the probability that a standard normal random variable will be less than d, and e represents the exponential term.
According to Black and Scholes (1973, p. 638), the option will be exercised when it is valuable, which means that the current stock price is greater than the present value of the strike price. Otherwise, i.e., if the stock price is much less than the strike price, the call option will be worthless and will not be exercised at the expiration date. Given that the value of the call option increases when the present value of the strike price decreases, and since the present value of the strike price decreases when the time to expiration increases, it follows that the call option becomes more valuable as the time to expiration increases, and vice-versa; the value of the call option decreases as the present value of the strike price approaches the stock price. Consistently, Bodie et al. (2018, p. 715) interpreted the BS formula by illustrating that N(d) represents the risk-adjusted probability that the call option will expire in the money, i.e., that the option is valuable. The call option is almost certain to be exercised when both N(d) terms approach 1, while the option will not be exercised when N(d) approaches 0. The motivation for this viewpoint is that when N(d) approaches 1, the call price can be expressed as S_0 − Xe^{−rt}, which represents the difference between the stock price and the present value of the strike price, S_0 − PV(X). Nielsen (1992, p. 1-2) elaborated that N(d_2) represents the probability that the option will expire in the money, while the term N(d_1) is a little more complicated, since it relates to the expected value of the stock received conditional on the option finishing in the money at the expiration date.
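As a concrete numerical illustration of equations (3.1)-(3.3), the short Python sketch below computes the Black-Scholes call premium; it is our own example rather than code from the cited sources, and the function name bs_call_price and the numbers in the final line are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def bs_call_price(S, X, r, sigma, t):
    """Black-Scholes price of a European call (equations 3.1-3.3).

    S: current stock price, X: strike price, r: continuously compounded
    risk-free rate, sigma: annualized volatility, t: time to expiration in years.
    """
    d1 = (np.log(S / X) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return S * norm.cdf(d1) - X * np.exp(-r * t) * norm.cdf(d2)

# Illustrative numbers: a call on a stock at 100 with strike 105, three months
# to expiry, a 1% risk-free rate, and 20% annual volatility.
print(bs_call_price(S=100, X=105, r=0.01, sigma=0.20, t=0.25))
```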
3.6 Choice of The Theory
There are several theories that could be used as a theoretical foundation for our study in
terms of their ability to interpret the price fluctuations as well as the underlying causes
behind these fluctuations.
First, the random walk theory could have provided a good basis for this research. However, since the random walk theory suggests that constantly outperforming the market is not possible because of the unpredictability of stock behaviour, the most controversial implication of the theory is that analysts and professional advisors have minimal or no influence on the value of a portfolio; hence, its proponents promote a buy-and-hold investment strategy as the best way to achieve returns. Furthermore, critics of this theory argue that the time horizon of each investor differs from that of other investors and that, due to the massive number of investors, the possibility of a trend forming in the short term is relatively high, so investors can outperform the market by strategically buying stocks when the price is low and reselling them when the price gets higher within a short time.
Also, other critics question the basis of the whole theory, claiming that stocks and prices follow a pattern and trend, at least in the long term if not in the short term. They argue that, due to the enormous number of factors that could affect the stock price, it is impossible to account for each factor, so it cannot be said that there is no pattern at all merely because there is no clearly defined pattern that stock prices follow.
Furthermore, human behaviour and attitudes play an important role in market volatility and cannot be measured accurately. Consistently, the random walk theory holds that current prices reflect the information available in the stock market, and future fluctuations mainly reflect individual investors' expectations about the future, which are difficult to compress or formulate into a determined style or pattern.
Second, the efficient market hypothesis assumes that all available information is accessible to all investors at the same time and that it is not only perceived and interpreted in exactly the same way but also met with the same response. This is a very strong assumption, since there are different ways to evaluate stocks, and investors' perceptions and the actions they take based on different analysis methods imply that there are certainly differences in investors' decisions. Hence, this concept is considered paradoxical in the theory.
The theory also does not consider the vast range of investment options and their relative returns in the investment market. For example, the theory cannot give a rational explanation of how mutual funds generate profits: according to the theory's fundamental assumptions, such mutual funds cannot generate profits, as every investor has the same information.
Yet, the efficient market theory is suitable for this study and in line with the random walk theory. This theory has been chosen on the grounds that the mismatch between the theoretical and market prices of options increases as the efficiency of the financial market decreases. More clearly, the implied volatility derived from the option price gets closer to the volatility of the underlying asset as the efficiency of the stock market increases. Consequently, the degree of deviation between implied volatility and the historical volatility of the underlying asset indicates the degree of market efficiency. Alternatively, the efficient market hypothesis holds that, as transaction costs decrease due to efficient competition between investors and traders in the stock market, stock prices react quickly to any new information. This implies that option prices (whose values are derived from stock prices) also react quickly to this information and adjust accordingly. Hence, implied volatility changes in response to changes in option prices in the stock market, which in turn change in response to any new information arriving on the stock market.
The no-riskless-arbitrage argument was not appropriate to adopt as the theoretical basis of this research, given the nature of the research question and the mathematical and statistical tests intended here, as it is more concerned with an area outside the scope of this research. However, this argument forms the foundation of the Black-Scholes model, upon which the implied volatility is calculated; hence, it was important to include it in the theoretical literature.
4 Fundamental Concepts of Volatility
In this chapter, the literature review introduces an outline of the concepts of volatility and of forecasting models. More specifically, those concepts cover the key dimensions of the problem at hand and the accompanying research question. We have taken care to support the theoretical conceptions with practical explanations in a way that serves and interacts with the general objectives of the study.
4.1 Volatility
Volatility Definition and Concept
Generally, differentiating between concepts such as volatility, uncertainty, risk, variability, and fluctuations requires drawing a fine line to distinguish the true indications of each concept. Uncertainty describes the possible outcomes of an event without allocating a probability to each outcome, whereas volatility is associated with risk through presenting a gauge of the potential fluctuations or changes in a specific financial variable, such as stock prices, interest rates, inflation rates, and growth rates (Aizenman & Pinto, 2005, p. 3). While uncertainty is an implicit description of risk, volatility provides a device to measure this risk. Hence, the importance of understanding volatility arises from its direct impact on the welfare costs borne by risk-averse individuals, along with the implied costs of its negative impact on the economy overall (Loayza et al., 2007, p. 343).
Estimating the volatility of returns is one of the main tools for brokers and practitioners in the financial stock market for managing their investment portfolios and making trading decisions (Oya, 2005, p. 940). In practice, volatility is "the standard deviation of the return provided by the variable per unit of time when the return is expressed using continuous compounding" (Hull, 2012, p. 201). However, the standard deviation calculated from a representative sample of data is considered an estimate of the variability of the whole population from which the sample was drawn (Altman, 2005, p. 903).
As a rule of thumb, Hull (2012, p. 201) argues that volatility used in risk management takes one day as the unit of time. In contrast, the volatility used for options pricing takes one year as the unit of time, i.e., the volatility is the standard deviation (SD) of the continuously compounded return per year. Given that the value of the stock price is P_t at the end of day t, the continuously compounded return at the end of this day is calculated as follows:
$$R_t = \ln\!\left(\frac{P_t}{P_{t-1}}\right) \quad (4.1)$$
where P_{t-1} denotes the value of the stock price the day before. Alternatively, the return can be calculated as the percentage change in the stock price, as follows:
$$R_t = \frac{P_t - P_{t-1}}{P_{t-1}} \quad (4.2)$$
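For illustration, the small Python sketch below computes both return measures and the corresponding annualized volatility from a daily price series; it is our own example, and the convention of 252 trading days per year as well as the function names are assumptions not stated in the text.

```python
import numpy as np
import pandas as pd

def log_returns(prices: pd.Series) -> pd.Series:
    """Continuously compounded daily returns, R_t = ln(P_t / P_{t-1})  (eq. 4.1)."""
    return np.log(prices / prices.shift(1)).dropna()

def simple_returns(prices: pd.Series) -> pd.Series:
    """Percentage-change returns, R_t = (P_t - P_{t-1}) / P_{t-1}  (eq. 4.2)."""
    return prices.pct_change().dropna()

def annualized_volatility(returns: pd.Series, trading_days: int = 252) -> float:
    """Sample standard deviation of daily returns, scaled to a one-year unit of time."""
    return returns.std(ddof=1) * np.sqrt(trading_days)
```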
4.2 Implied Volatility
In Black-Sholes formula, there is one parameter that has a great impact on the value of
the options and the volatility of the underlying asset. However, the impact of volatility is
not observed directly, but it is implied in the price of the option that is written on the stock
(Deng et al., 2008, p. 16). More important, implied volatility in practice reflects the
common view of the market towards the future volatility of stock price (Simon, 2002, p.
960). Consistently, Shaikh and Padhi (2015, p.44) argued that implied volatility is
calculated by comparing option prices resulted from BS model and option prices in the
market. Notably, the option price yield from BS model is relying on the value of volatility
that is inserted in the model. All these together, suggests that higher option price suggest
higher market concern during the residual option life. Hence, implied volatility that is
embodied in the options prices represents the x-ante measure of future volatility.
Conceptually, implied volatility is similar to the yield to maturity (YTM), given that the YTM equates the bond price with the present value of the bond's expected payments; hence, we can derive the YTM by matching the market price of the bond with that present value (Whaley, 2009, p. 98). It is worth mentioning that the majority of options brokers utilize the Black-Scholes formula to derive the implied volatility of the underlying assets. In practice, options traders use the Black-Scholes formula to determine the implied volatility of the underlying assets rather than using it as a pricing tool (Deng et al., 2007, p. 16-17). This is mainly justified by the fact that the volatility included in option prices is less volatile than the option prices themselves (Hull, 2009, p. 297).
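To illustrate this inversion, the sketch below (our own example, with a hypothetical quoted price and illustrative parameters) recovers the implied volatility by finding the value of sigma at which the Black-Scholes call price matches the observed market price, using simple root bracketing with scipy.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(S, X, r, sigma, t):
    """European call price from the Black-Scholes formula (eqs. 3.1-3.3)."""
    d1 = (np.log(S / X) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return S * norm.cdf(d1) - X * np.exp(-r * t) * norm.cdf(d2)

def implied_volatility(market_price, S, X, r, t):
    """Volatility that equates the Black-Scholes price with the observed market price,
    found by bracketing the root of f(sigma) = BS(sigma) - market_price."""
    f = lambda sigma: bs_call_price(S, X, r, sigma, t) - market_price
    return brentq(f, 1e-6, 5.0)   # search between near-zero and 500% volatility

# Illustrative numbers: a three-month call on a stock at 100, strike 105, quoted at 2.50.
print(implied_volatility(market_price=2.50, S=100, X=105, r=0.01, t=0.25))
```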
In this context, in 1993, the CBOE launched implied volatility indices. The well-known VIX is an index of implied volatility derived from 30-day options written on the S&P 500 index (Hull, 2015, p. 2005). An implied volatility index estimates volatility in the short term, derived from the prices of options written on an equity index (Shaikh & Padhi, 2015, p. 45). For example, the VSTOXX index represents the implied volatility derived from the prices of options written on the Euro Stoxx 50, the European equity index.
As an illustration, Figure 4.1 shows the time plot of the VSTOXX index from January 3, 2005, to December 30, 2019, based on daily observations. During the financial crisis in 2008 and the beginning of 2009, the VSTOXX spikes and experiences high values that reflect the financial market turmoil. Furthermore, the European financial market was highly volatile during the period 2010-2012, when the European sovereign debt crisis emerged and the European financial market showed anxiety about the future, which was reflected in the values of the VSTOXX in this period.
CHAPTER 3-ENTREPRENEURSHIP [Autosaved].pptxCHAPTER 3-ENTREPRENEURSHIP [Autosaved].pptx
CHAPTER 3-ENTREPRENEURSHIP [Autosaved].pptx
 

Recently uploaded

Dividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxDividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxanshikagoel52
 
Instant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School DesignsInstant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School Designsegoetzinger
 
Q3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesQ3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesMarketing847413
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spiritegoetzinger
 
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Pooja Nehwal
 
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawlmakika9823
 
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130Suhani Kapoor
 
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...Call Girls in Nagpur High Profile
 
Lundin Gold April 2024 Corporate Presentation v4.pdf
Lundin Gold April 2024 Corporate Presentation v4.pdfLundin Gold April 2024 Corporate Presentation v4.pdf
Lundin Gold April 2024 Corporate Presentation v4.pdfAdnet Communications
 
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Roomdivyansh0kumar0
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...Suhani Kapoor
 
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxOAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxhiddenlevers
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure servicePooja Nehwal
 
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...Henry Tapper
 
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Sapana Sha
 
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsHigh Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escortsranjana rawat
 

Recently uploaded (20)

Dividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxDividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptx
 
Instant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School DesignsInstant Issue Debit Cards - School Designs
Instant Issue Debit Cards - School Designs
 
Q3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast SlidesQ3 2024 Earnings Conference Call and Webcast Slides
Q3 2024 Earnings Conference Call and Webcast Slides
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spirit
 
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
 
Commercial Bank Economic Capsule - April 2024
Commercial Bank Economic Capsule - April 2024Commercial Bank Economic Capsule - April 2024
Commercial Bank Economic Capsule - April 2024
 
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(DIYA) Bhumkar Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
 
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service AizawlVip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
Vip B Aizawl Call Girls #9907093804 Contact Number Escorts Service Aizawl
 
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130
VIP Call Girls Service Dilsukhnagar Hyderabad Call +91-8250192130
 
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
 
Lundin Gold April 2024 Corporate Presentation v4.pdf
Lundin Gold April 2024 Corporate Presentation v4.pdfLundin Gold April 2024 Corporate Presentation v4.pdf
Lundin Gold April 2024 Corporate Presentation v4.pdf
 
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jodhpur Park 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jodhpur Park 👉 8250192130 Available With Room
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
 
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptxOAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
OAT_RI_Ep19 WeighingTheRisks_Apr24_TheYellowMetal.pptx
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
 
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...
letter-from-the-chair-to-the-fca-relating-to-british-steel-pensions-scheme-15...
 
🔝+919953056974 🔝young Delhi Escort service Pusa Road
🔝+919953056974 🔝young Delhi Escort service Pusa Road🔝+919953056974 🔝young Delhi Escort service Pusa Road
🔝+919953056974 🔝young Delhi Escort service Pusa Road
 
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111Call Girls In Yusuf Sarai Women Seeking Men 9654467111
Call Girls In Yusuf Sarai Women Seeking Men 9654467111
 
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsHigh Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
High Class Call Girls Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
 

Implied Volatility and Historical Volatility: An Empirical Evidence About The Content of Information And Forecasting Power

Table of Contents

1 Introduction
1.1 Background
1.2 GARCH and EGARCH Models
1.3 Problematization
1.4 Knowledge Gap
1.5 Research Question
1.6 The Purpose of The Study
1.7 Delimitations
1.8 Definitions of Terms
2 Theoretical Methodology
2.1 Choice of Research Area
2.2 Research Philosophy and Perspectives
2.3 The Paradigm
2.4 Ontological Assumptions
2.5 Epistemological Assumption
2.6 Axiological Assumption
2.7 Rhetorical Assumption
2.8 Research approach and methodological assumption
2.9 Research Method
2.10 Research design
2.11 Data Collection Method in Quantitative Research
3 Theoretical Framework
3.1 Review of Prior Studies
3.2 Efficient Market Hypothesis
3.3 The Random Walk Theory
3.4 No riskless arbitrage
3.5 Black and Scholes model
3.6 Choice of The Theory
4 Fundamental Concepts of Volatility
4.1 Volatility
4.2 Implied Volatility
4.3 Modelling Volatility
5 Practical Methodology
5.1 Research question and hypotheses
5.2 Data sample
5.3 Input Parameters
5.4 Statistical Tests
5.5 Preliminary Data Analysis
5.6 Model Specifications
6 Empirical Testing and Findings
6.1 In-sample performance
6.2 Out-of-sample Volatility Forecasts Evaluation
6.3 Mincer-Zarnowitz regression test
6.4 Encompassing Regression Test
7 Conclusion
7.1 Conclusion
7.2 Further research
7.3 Implications
7.4 Contributions
7.5 Ethical and Societal Considerations
7.6 Quality Criteria
8 References

List of Figures
Figure 1.1 Markets wake up with a jolt to the implications of COVID-19 (The Economist, 2020).
Figure 4.1 Time plot of the VIX index of the European equity index, Jan 2005 to Dec 2019.
Figure 4.2 Time plot of the returns of OMX30, the stock market index for Stockholm, Jan 2005 to Dec 2019.
Figure 5.1 Time plots of stock indexes for each of the European stock market and the S&P 500, Jan 2005 to Dec 2019.
Figure 5.2 Daily co-movement of the S&P 500 index and its respective volatility index, i.e., the VIX index, Jan 2005 to Dec 2019.
Figure 5.3 Daily co-movement of the Euro Stoxx 50 index and its respective volatility index, i.e., the VSTOXX index, Jan 2005 to Dec 2019.
Figure 5.4 Time-series plots of returns for each of the S&P 500 Index and the Euro Stoxx 50 Index, Jan 2005 to Dec 2019.
Figure 6.1 Time-series plot of daily squared returns for the Eurozone stock market, Jan 2005 to Dec 2019.
Figure 6.2 Time-series plot of daily squared returns for the S&P 500 stock market index, Jan 2005 to Dec 2019.

List of Tables
Table 5.1 Descriptive statistics: January 2005 to December 2019.
Table 5.2 Different specifications of ARIMA(p,d,q) models, Eurozone (Euro Stoxx 50).
Table 5.3 Different specifications of ARIMA(p,d,q) models, USA (S&P 500).
Table 6.1 Estimation of GARCH, EGARCH and their corresponding VIX.
Table 6.2 Estimation of GARCH, EGARCH and their corresponding VIX.
Table 6.3 Descriptive statistics for realized volatility.
Table 6.4 Mincer-Zarnowitz regression test.
Table 6.5 Encompassing regression test.
Table 7.1 Encompassing regression test result.
1 Introduction

1.1 Background

For so long, the United States has been considered the financial hub of the world and the most influential player in global markets. As we are writing these words, the world is facing a global health disaster in the form of the COVID-19 outbreak that originated in China. The outbreak has cast its shadow over global financial markets. The news from Italy, which at the time had the highest cluster of infections outside Asia, led to an 8.92% fall in the S&P 500 index on February 28 (Clifford T, 2020). Amid this global health crisis, markets are nervous, and investors are looking for a safe resort for their investments. As uncertainty levels increase, investors are trying to assess which assets are most affected by the shock. Declining copper prices are an influential indicator that markets are slowing down. So far, it appears that the most affected stocks belong to firms that depend on remote supply chains, such as carmakers and airlines, which had to cancel flights in and out of China as many countries restricted air traffic in an effort to limit the virus spread (Economist, 2020). Consequently, the stocks most interconnected with China, such as oil firms, have shown a sharp plunge. Meanwhile, gold prices have reached their highest levels in seven years (Lewis, 2020), the US dollar exchange rate has declined as the Federal Reserve tries to save the stock market (Smialek & Tankersley, 2020), and the yield on ten-year Treasury bonds declined to 1.29% on February 27 (Smith, 2020). We do not yet know the full impact or repercussions of the COVID-19 crisis on the global market; a sense of unease and discomfort is prevailing among investors, who expect a further and deeper crack in the global economy caused by the virus outbreak. Before this crisis, the markets were doing well and SEB bank had forecast a soft landing and a reduced recession (SEB Bank, 2019, p. 6). Now, the dominant concerns among investors are the opaque structure of financial instruments that depend on low volatility and the bloated credit market.

Figure 1.1 Markets wake up with a jolt to the implications of COVID-19 (The Economist, 2020).
Also, volatility has increased significantly, as shown in Figure 1.1, highlighting the significance of fluctuations and time-varying financial uncertainty and their impact on other economic variables (Economist, 2020). Generally speaking, uncertainty is defined as the volatility of random variables that cannot be anticipated by market participants. It is worth noting that uncertainty may have negative impacts on employment rates, investment, and consumption, as well as on fund-allocation decisions (Jurado et al., 2015, p. 1177). In the same line, Kearney (2000, p. 31-32) argues that significant movements in stock returns can have crucial effects on a firm's financial decisions, investment decisions, and other economic cycles. Furthermore, the importance of studying volatility dates back to the 1987 financial crash; since then, scholars have paid great attention to volatility in terms of its causes and effects.

It is well acknowledged that stockbrokers, particularly in the stock market, base their investment decisions not only on domestic information from the local market but also on information generated by global markets. This is due to the accelerating globalization of financial markets caused by the comparatively free flow of commodities and funds alongside the upheaval in information technology (Koutmos & Booth, 1995, p. 747). Rational investment demands quick reaction to and accurate evaluation of new information. This implies that the market assessment is reasonable and logical and that every stock will be valued to earn returns consistent with its risk. It also suggests that only anticipated events will shift prices; shocks or unexpected events cannot follow a certain forecast model, and thus financial shocks are not correlated. According to the efficient market theory, any new information arriving on the market should quickly be incorporated into stock prices. However, not all events are predictable. Once new information arrives in the stock market, prices adjust rapidly to reflect it. The significance of new information arriving in the stock market is assessed by its ability to change the investor's attitude toward risk and return (Birru & Figlewski, 2010, p. 1).

It is important to realize the disastrous impacts of financial crises, which are often represented in the collapse of asset market prices, decreases in real estate prices, and falls in stock prices, alongside a collapse in banking sectors, accompanied by increases in unemployment rates and a recession in production (Reinhart & Rogoff, 2009, p. 466). Under these circumstances, financial traders and organizations seek to develop tools that help them assess what risk they are exposed to and to what extent expected returns are proportionate to that risk. Perhaps the most familiar of these tools is bond duration, which in practice is used to measure the sensitivity of the market value of a bond to changes in market interest rates. More broadly, a portfolio manager can immunize the value of a portfolio by matching its duration with the desired time interval (Reilly & Sidhu, 1980, p. 58). Other tools have since emerged in an effort to manage risk, including but not limited to the CAPM model, the Black-Scholes model for option pricing, stress testing, value-at-risk (VaR), RiskMetrics, and CreditMetrics.
These valuation models in practice measure risk by market volatility, and fluctuations in market volatility influence the expected returns on all financial instruments. Accurate measurement of changes in volatility may therefore lead to a coherent illustration of the variation in returns over time (Harvey & Whaley, 1992, p. 43). Observably, the volatilities embodied in option prices are often used to generate information about future market volatility. Broadly speaking, implied volatilities represent the expectations of market traders about market volatility in the short term.
It is worth noting that implied volatility is usually extracted by matching observed market option prices with the theoretical prices calculated according to the Black-Scholes model (Goncalves & Guidolin, 2006, p. 1591). Implied volatilities are sufficiently important that they are commonly reported and disclosed in financial news services, and many financial professionals closely follow these reports. For all these reasons, the accurate estimation of implied volatility and the information content of implied volatility have received great attention in the financial literature (Corrado & Miller, 2005, p. 340).

The implied volatility that is extracted from option prices is generally accepted as a device used to predict the future volatility of the underlying financial assets over the lifetime of the option. More importantly, under the efficient market hypothesis, implied volatility should contain all the information incorporated in all other variables, including historical volatility, in explaining the future variation in volatility (Yu et al., 2010, p. 1; Christensen & Prabhala, 1998, p. 126). Consistently, Becker et al. (2006, p. 139) argue that implied volatility should absorb all relevant conditioning information in order to be a satisfactory tool to predict the future volatility of underlying assets. Comparatively, Canina and Figlewski (1993, p. 659) argue that the main reason behind the general acceptance of implied volatility as the best predictor of future volatility is simply that it represents the best market expectation of future volatility. By the same token, Day and Lewis (1992, p. 267) considered the volatility embodied in option prices a measure used to predict the average volatility of underlying assets over the remaining life of the option. Furthermore, they argue that the predictive power of implied volatility reflects the extent to which the informational content of option prices is subsumed in their volatilities. Moreover, Fleming (1998, p. 318) argues that, given that all inputs to the option pricing model except the volatility of the underlying asset are objectively determined, implied volatility subsumes all available information in the option prices only if the option pricing model is correct and the market is efficient, i.e., all relevant information is collapsed into the option prices. Hence, implied volatility should overcome historical volatility in predicting the future volatility of the underlying assets. In contrast, Pong et al. (2004, p. 2541-2542) start their argument by clarifying that the future volatility of a stock price can be forecast either by studying the past behaviour of this price, i.e., historical information about this price, or by utilizing the implied volatility that can be recovered from the value of an option written on this stock.

In 1993, the Chicago Board Options Exchange (CBOE) launched the CBOE Volatility Index, i.e., the VIX index, which was primarily constructed to reflect the market's prediction of future volatility over 30 days from at-the-money S&P 100 Index option prices. The VIX index quickly turned into the principal barometer for US stock market volatility.
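Since the extraction procedure described at the start of this section (matching observed option prices to Black-Scholes prices and solving for the unknown volatility) is central to the study, a minimal sketch may help. The example below inverts the Black-Scholes call formula by one-dimensional root finding; it is not taken from the thesis, and the option parameters shown are hypothetical values chosen purely for illustration.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call_price(S, K, T, r, sigma):
    # Black-Scholes price of a European call on a non-dividend-paying asset
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    # Solve for the sigma that equates the model price with the observed price
    return brentq(lambda sigma: bs_call_price(S, K, T, r, sigma) - market_price,
                  1e-6, 5.0)

# Hypothetical at-the-money call: index level 3000, 30 days to expiry,
# 1% risk-free rate, observed market price 55
iv = implied_vol(market_price=55.0, S=3000.0, K=3000.0, T=30 / 365, r=0.01)
print(f"Implied volatility: {iv:.2%}")

Note that the volatility indices discussed in this section aggregate information of this kind across a whole strip of options rather than inverting a single contract, so this single-option inversion is only the conceptual starting point.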
The VIX is commonly displayed in leading financial publications such as the Wall Street Journal and Barron's, in addition to financial news on CNBC, Bloomberg TV, and CNN/Money, given that the VIX index is practically considered a "fear gauge" (CBOE Volatility Index, 2019, p. 3). Ten years later, on September 22, 2003, the CBOE updated the structure of the implied volatility index and introduced an advanced volatility index based on the S&P 500 index, which is widely known as VIX (Pati et al., 2018, p. 2553).
As an illustration, the driving factor behind using options on the S&P 500 rather than the S&P 100 comes mainly from the fact that the S&P 500 is the primary stock market index in the US, not only for the options market but also for hedging (Fernandes et al., 2014, p. 1). It is important to realize that the only principal difference between the VIX and a stock price index, such as the DJIA, is that the former measures volatility while the latter measures stock prices (Whaley, 2009, p. 98). Furthermore, the VIX is regularly used by investors and reflects their view of future stock market volatility (Whaley, 2000, p. 12). Generally speaking, high VIX values normally indicate anxiety and turmoil in the stock market, causing stock prices to decline to unprecedented levels before being followed by sharp increases. On the other hand, low VIX values reflect optimism among market traders, preparing the market for stability and increasing the probability of market progress (Fernandes et al., 2014, p. 2).

1.2 GARCH and EGARCH Models

Volatility is a measure of risk when valuing a certain asset; hence, it is crucial to measure the volatility connected to asset prices. One of the most distinctive features of asset price volatility is that it is unobservable. Many models have been developed to estimate and measure the volatility of asset prices as well as to provide an accurate estimate of their future volatility (Tsay, 2013, p. 176-177). Following Day and Lewis (1992, p. 268), one of the most famous models to forecast future (expected) market volatility is the GARCH model, which was built on the relationship between the historical volatility of the market and the conditional or expected market risk premium. The unique feature of a GARCH forecasting model is that its statistical technique allows the volatility of asset returns to change over time according to a generalized autoregressive conditional heteroscedasticity process. This feature makes the GARCH model the most appealing model for the purpose of this study, as the researchers intend to examine the possibility of enhancing this forecasting model. However, the GARCH model does not differentiate between positive and negative shocks in the asset's price volatility. The researchers therefore consider using the EGARCH model with the intention of enhancing predictive power and comparing both models.
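To make the modelling step concrete, the sketch below estimates the two baseline specifications, GARCH(1,1) and EGARCH(1,1), on daily index returns using the Python arch package. This is an illustrative sketch rather than the thesis's own estimation code: the file name and column name are assumptions, and the implied-volatility-augmented variance equations studied in the thesis would additionally require including the lagged volatility index as an exogenous variance regressor, which is not shown here.

import numpy as np
import pandas as pd
from arch import arch_model

# Assumed input: a CSV of daily closing prices for one of the indices (e.g., S&P 500)
prices = pd.read_csv("sp500.csv", index_col=0, parse_dates=True)["Close"]
returns = 100 * np.log(prices).diff().dropna()  # daily log returns, in percent

# Symmetric GARCH(1,1): conditional variance driven by last period's squared
# shock and last period's variance
garch = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal").fit(disp="off")

# EGARCH(1,1): the asymmetry term (o=1) lets negative and positive shocks
# affect volatility differently
egarch = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="normal").fit(disp="off")

print(garch.summary())
print(egarch.summary())

# One-day-ahead conditional variance forecast from the fitted GARCH(1,1)
print(garch.forecast(horizon=1).variance.iloc[-1])

Because the thesis works with one-day-ahead rolling forecasts, in practice a loop that re-estimates (or at least re-forecasts) the model as the estimation window moves forward would wrap around the final call above.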
1.3 Problematization

In practice, traditional finance scholars have employed the past behaviour of asset prices to develop models that help estimate future volatility, which implies that forecasting future volatility is backward-looking by nature. The other method of projecting the future is to utilize the implied volatility that is embodied in option prices. The driving factor behind this approach is the common thought that option prices basically depend on future expectations of volatility. Thus, the expected volatility can be estimated by matching market option prices with the option prices suggested by the Black-Scholes formula (Dumas et al., 1998, p. 2060). As per Liu et al. (2013, p. 861), the reason that implied volatility provides a better estimate of market uncertainty is that implied volatility includes historical volatility information as well as investors' anticipation of future market circumstances reflected through current option prices. It is widely believed that implied volatility represents the best proxy for the future volatility of the market's options; the justification is that if the options market is completely efficient, the implied volatility embodied in option prices should reflect the best forecast of future volatility (Bentes, 2017, p. 243). On the contrary, the findings documented by Becker et al. (2006, p. 152) point out that there is no significant relationship between implied volatility and future volatility and report that the implied volatility index does not contain further information beyond other variables. Alternatively, Christensen and Prabhala (1998, p. 125) concluded that implied volatility surpasses historical volatility in forecasting future volatility. They added that implied volatility includes all past information, unlike volatility extracted from historical information alone. They adopted the following argument: if the options market is efficient, implied volatility should serve as the best estimate of future volatility. They justified the difference between their results and previous studies by their use of a longer time series of data and by the changes in the financial system around the 1987 crash. As more recent evidence, Pati et al. (2018, p. 2566) concluded that implied volatility contains relevant information beyond that suggested by the GARCH family models; however, the volatility estimated from the past behaviour of the assets also contains relevant information, and implied volatility is considered a biased forecast of future volatility.

Our problematization in this research stems from the common view in previous research that implied volatility produces a better forecast of the volatility of the underlying asset over the lifetime of the option (Yu et al., 2010, p. 1; Liu et al., 2013, p. 861), which means that it is a short-term forecast. On the other hand, volatility forecasts based on historical behaviour can cover longer periods than implied volatility, at the expense of the accuracy of the forecasts themselves. In other words, we have two estimation approaches: first, implied volatility can give accurate estimates of volatility, but only for the short term; second, historical volatility can give us a longer estimation period but less accuracy. The dilemma at hand is whether we can derive or develop an estimator that combines the advantages of both approaches and works as a hybrid model, forecasting volatility over a longer period than the lifetime of the option while giving accurate estimates at the same time.

1.4 Knowledge Gap

The knowledge gap basically stems from the following arguments. Firstly, due to the growing turmoil and shakiness of the stock market, volatility has attracted great attention from researchers; the core issue of this debate centres around the predictive power of implied volatility and the informational content of volatility (Bentes, 2017, p. 241). Furthermore, the empirical evidence about this issue, i.e., the importance of implied volatility in forecasting the future as well as its ability to absorb all available information, is still inconclusive and lacks consensus. Looking back at previous studies, for example, Latane and Rendleman (1976), Fleming (1998), Christensen and Prabhala (1998), Jiang and Tian (2005), Fung (2007), Chang and Tabak (2007), and Pati et al. (2018) presented results emphasizing, to various degrees, that implied volatility is better than historical volatility in forecasting future volatility.
In contrast, other studies, such as Canina and Figlewski (1993), Pong et al. (2004), Becker et al. (2006), Becker et al. (2007), and Mohammad and Sakti (2018), documented results confirming that historical volatility is better than implied volatility in forecasting future volatility and contains additional relevant information compared to the volatility extracted from option prices.
Clearly stated, the competing empirical explanations for the implied volatility phenomenon are still inconclusive and incomplete. Thus, the failure of the literature to reach a definite conclusion about the predictive power of implied volatility has motivated us to conduct this study, as we see that further research in this area is still required to settle the confusion and contribute to building consensus. Secondly, we found that one of the basic limitations of implied volatility in forecasting the future is that its use is limited to the remaining life of the option, i.e., short-term prediction. Thus, increasing the prediction time horizon using implied volatility would form a considerable contribution to the finance literature. Thirdly, in line with Pati et al. (2018, p. 2553), we argue that the implied volatility produced by indexes has not been much researched in the literature. Consequently, examining the values of implied volatility indexes in comparison to their corresponding stock market indexes will lead to new contributions to finance theory and literature. Considering the differences in depth, liquidity, and regulatory structure between the two stock markets, it is interesting to conduct a study using a new set of data on those stock market indices. This may support the reported results and increase the robustness of our study.

1.5 Research Question

All of the above led us to develop the following research question: "Can implied volatility be used to improve in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?" To answer this question, we formulated sub-questions as follows:
• Does implied volatility add more information when included as an explanatory variable in the conditional variance equation?
• Does implied volatility contain some information about future realized volatility?
• Is the forecasting performance of implied volatility unbiased?
• Is the forecasting performance of implied volatility better than that of historical volatility in explaining future realized volatility (i.e., does it have predictive power)?
• Does the volatility forecast based on implied volatility encompass all relevant information contained in historical volatility?

1.6 The Purpose of The Study

This study seeks to address whether implied volatility includes further information than historical volatility in forecasting stock price movements, and to provide new insights into the volatility of the underlying assets on which options are written. We also aim in this study to investigate the predictive power of implied volatility by studying two main global indices, namely the S&P 500, which represents one of the most followed stock market indices in the USA, and the Euro Stoxx 50, which represents the stock market index of the Eurozone.
Additionally, we investigate the informational content of implied volatility in comparison to historical volatility. For this purpose, we will examine whether or not it is biased by exploring the extent of the reaction of implied volatility to both bad and good events in the stock markets. Furthermore, the empirical evidence helps provide more explanations of implied volatility movements in Europe and the US, which constitute the overwhelming majority of financial transactions at the global level. For the purpose of assessing the predictive performance of the forecasting models, we will utilize Mincer-Zarnowitz (MZ) regressions. Equally important, we aim to employ several econometric models in such a way as to give more robustness to the documented results. By all means, this study seeks to provide useful implications for portfolio managers, risk managers, and options traders, by helping them formulate trading strategies that produce earnings by recognizing mispriced options, as well as for academic researchers.
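As a concrete illustration of the evaluation step just mentioned, the sketch below runs a Mincer-Zarnowitz regression and an encompassing regression of realized volatility (proxied by daily squared returns) on competing volatility forecasts, and compares mean squared errors. It is a minimal sketch rather than the thesis's code: the synthetic series stand in for the actual one-day-ahead rolling forecasts, and the variable names are assumptions made for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic placeholders standing in for the thesis data: daily returns and two
# aligned one-day-ahead variance forecasts (implied-volatility-based and GARCH-based)
returns = pd.Series(rng.normal(0, 1, n))
realized = returns ** 2                                      # RV proxy: daily squared returns
iv = pd.Series(rng.uniform(0.5, 2.0, n), name="iv")          # IV-based forecast
garch = pd.Series(rng.uniform(0.5, 2.0, n), name="garch")    # GARCH-based forecast

# Mincer-Zarnowitz regression: RV_t = a + b * forecast_t + e_t.
# An unbiased forecast requires a = 0 and b = 1 jointly.
mz = sm.OLS(realized, sm.add_constant(iv)).fit()
print(mz.summary())
print(mz.f_test("const = 0, iv = 1"))

# Encompassing regression: RV_t = a + b1 * IV_t + b2 * GARCH_t + e_t.
# If the IV forecast subsumes the historical-volatility forecast, b2 should be zero.
enc = sm.OLS(realized, sm.add_constant(pd.concat([iv, garch], axis=1))).fit()
print(enc.summary())

# Forecast accuracy compared by mean squared error
print("MSE (IV):   ", np.mean((realized - iv) ** 2))
print("MSE (GARCH):", np.mean((realized - garch) ** 2))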
1.7 Delimitations

There are some expected limitations that should be noted. First, we will use data from countries that have introduced implied volatility indices, so we have limited our study to these countries. Furthermore, the findings will only be valid for these countries, or for countries with similar economies. Secondly, and more importantly, there is a lack of theories in finance that constitute a foundation for this type of study. Thus, the documented results will be practical in nature rather than theoretical.

1.8 Definitions of Terms

Implied volatility: The volatility embodied in option prices, which usually contains information about the market's expectations of the future volatility of the underlying assets over the lifetime of the option. In practice, implied volatility is calculated by first matching market option prices to the theoretical prices of the Black-Scholes model and thereafter solving for the unknown volatility, based on the prices of the underlying assets and the information in the options contracts (Goncalves and Guidolin, 2006, p. 1591).

Implied volatility index (VIX): It is like other equity market indexes (S&P 500, DJIA, OMXS30), with the difference that the VIX measures volatility while the others measure prices. It was launched in 1993 to provide a benchmark of the expected volatility of the underlying assets over the short term and, more importantly, to provide an index upon which options contracts on volatility can be written. It is worth noting that the VIX is considered forward-looking, providing the market's expectations about volatility in the short term (Whaley, 2009, p. 98). High levels of the VIX mean pessimistic expectations and tend to make equity prices go down sharply, whereas low levels of the VIX reflect an optimistic view, causing equity prices to go up (Fernandes et al., 2014, p. 2).

Historical volatility: Conceptually, in the finance literature, the term is often used to indicate the deviations from the mean value of all observations over a chosen time interval (Poon & Granger, 2003, p. 480).

ARCH model: The name stands for autoregressive conditionally heteroscedastic. The model arises from the argument that, given the stylized facts shown by time series of returns, such as "volatility clustering", the general assumption of the classical linear regression model (CLRM) that the variance of the error term is constant does not make sense for such time series. Hence, the model assumes that the variance of the error term is non-constant (Brooks, 2014, p. 423; Engle, 1982, p. 987).

GARCH model: It is considered an extension of Engle's ARCH by allowing the conditional variance to follow an ARMA-type structure (Enders, 2015, p. 129). Hence, the general idea behind the GARCH model is that the variance is a function not only of the lagged squared return but also of the lagged variance (Bollerslev, 1986, p. 309).
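For readers who prefer the definitions above in equation form, the following display gives the standard textbook conditional variance equations. The exact EGARCH parameterization varies slightly between references, and the implied-volatility-augmented variant is shown here only as one common way of adding an exogenous regressor to the variance equation, in the spirit of this study; it is not necessarily the thesis's exact specification.

\begin{align}
  \text{GARCH}(1,1):\quad & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 \\
  \text{EGARCH}(1,1):\quad & \ln\sigma_t^2 = \omega + \beta\,\ln\sigma_{t-1}^2
      + \alpha\left(\lvert z_{t-1}\rvert - \mathrm{E}\lvert z_{t-1}\rvert\right) + \gamma\, z_{t-1},
      \qquad z_{t-1} = \varepsilon_{t-1}/\sigma_{t-1} \\
  \text{GARCH}(1,1)\text{-IV}:\quad & \sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2 + \delta\,\mathrm{IV}_{t-1}^2
\end{align}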
2 Theoretical Methodology

This chapter is mainly concerned with the philosophical standpoint of this research. It delivers the framework used to construct and conduct the research in terms of research philosophy and perspectives, the paradigm used, the ontological and epistemological stances, and the sampling method. Furthermore, this chapter includes the motivation for each aspect the researchers have adopted, to establish the validity of the adopted knowledge stance and give a complete picture of the methodological stance.

2.1 Choice of Research Area

The estimation and forecasting of the future volatility of financial variables have an important contribution to make to finance theory, due to the growing role of volatility in various financial applications, such as the pricing of financial derivatives, portfolio management, risk management, hedging, and asset allocation (Pati et al., 2018, p. 2552). Accordingly, correct estimation of this factor is necessary for sound financial decision-making. Broadly, academics and other financial market practitioners, such as investors, speculators, market regulators, and policymakers, have traditionally built their expectations about volatility on the past behaviour of financial variables; put differently, they use the historical behaviour of financial instruments to explain expected variation and forecast future changes. Despite the useful contribution of such approaches, they are by nature backward-looking attempts to anticipate the future. On the other hand, a second way to explore volatility is to estimate future volatility from option prices, on the understanding that option pricing basically depends on expected forward volatility. Accurate option prices mean accurate expected future volatility. Consequently, accurate estimation of the future movements of financial variables (i.e., their volatility) basically relies on accurate prices of the options written on these financial variables (Dumas et al., 1998, p. 2059). Furthermore, options traders in practice consider implied volatility the most important input not only in determining the option price but also in determining the future fluctuations of the underlying asset, thereby promoting their ability to forecast the future of the stock market (Mohammad & Sakti, 2018, p. 431). All of this motivates us to select implied volatility as our preferred area of research. Further study is needed to explore implied volatility for the sake of assessing its role in improving stock market forecasts. More importantly, with an educational background in finance and accounting, we find it both interesting and important to explore and examine implied volatility, realized volatility, the modelling and forecasting of volatility, and the information content of implied volatility in the European countries and the USA, contributing to the ongoing discussion about the informational efficiency of implied volatility compared with realized volatility, as well as seeking to develop the concept of implied volatility so that it could be implemented in the financial industry.
2.2 Research Philosophy and Perspectives

Conducting research on a specific matter is an intricate undertaking, since there is never only one way, angle, or perspective for any subject of study, and there is not even a unified literature on how to define research (Collis & Hussey, 2013, p. 3). In everyday life, our perception and philosophical view of subjects and events, their purposes, and the adopted viewpoint will eventually dictate the way we choose to provide a solution to our problem. Hence, the philosophical and theoretical framework is essential for scientific research and provides the founding stones and guidelines for undertaking it (Collis & Hussey, 2013, p. 43); it provides the rationale behind the research as well as a framework for interpreting and illuminating the results of the research process and understanding the context of the social phenomenon (Bryman, 2012, p. 19).

2.3 The Paradigm

A paradigm is a set of criteria that guides how research should be undertaken (Collis & Hussey, 2013, p. 43); according to Saunders et al. (2012, p. 723), it is "A set of basic and taken-for-granted assumptions which underwrite the frame of reference, mode of theorizing and ways of working in which a group operates." Likewise, Bryman (2003, p. 4) has defined a paradigm as the set of concepts and conventions that guides or dictates what should be studied, how research should be carried out, and how to illustrate the research findings. Hence, the philosophical assumptions of the research will forge the research strategy, the data collection methods, and the data processing (Saunders et al., 2012, p. 104). The researcher should be able to argue for the chosen philosophical standpoint and motivate the choice of certain points of view over other philosophical stances (Saunders et al., 2012, p. 104).

Generally, there are two main paradigms: interpretivism and positivism. Picturing a positivist researcher is like looking at a biology or experimental physics scientist conducting an experiment in a lab. A positivist researcher tries to examine observable and measurable factors under controlled circumstances, describe and measure the effect of a distinct changed factor upon the whole phenomenon, and record the feedback. This is natural, given that this approach was developed in the laboratory to serve the understanding of natural phenomena. Positivism perceives the world as governed by cause and effect in a mechanical relationship between its components (Saunders et al., 2012, p. 105). Knowledge originates from experiments that test an existing theory, with results that can be scientifically verified (Collis & Hussey, 2013, p. 44). The main trait of this paradigm is that it perceives social reality as "objective and singular", and examining this reality will not affect it (Collis & Hussey, 2013, p. 43). On the other hand, with the rise of the social sciences and their dealing with variables like culture, language, ethnic orientation, and other human characteristics like values, customs, traditions, and religion, the perception of social reality is not singular; it is rather subjective, since social reality is the perception formed in our minds about an event (Collis & Hussey, 2013, p. 43). For example, the same word, gesture, or behaviour could be considered friendly or hostile depending on the social circumstances. Scientists realized that social behaviour could not be studied in isolation from society and needed to be studied in its social context, and that understanding the morale behind this behaviour is essential (Saunders et al., 2012, p. 107).
Positivism could not serve well in this matter, as these social and cultural values are relative; it is almost impossible to reach a consensus over a specific perception of a social phenomenon. Positivism was deemed inadequate, and interpretivism emerged to investigate social and behavioural phenomena and integrate their geographical, social, psychological, and cultural aspects. The researcher's opinions in scientific research became a key factor in interpreting the social phenomenon (Collis & Hussey, 2013, p. 45). In this study, our research question is, "Can implied volatility be used to improve in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?", and we choose to undertake our research under the positivist approach, since we are seeking an objective view of social reality that will enable us to test previous theories to gain further evidence, provide more extensive analysis, and scientifically verify previous conflicting pieces of evidence, as well as our developed hypotheses. We think that this approach is adequate for our research. Meanwhile, the interpretivist approach is much less appropriate for such quantitative research and would distort the objectivity of this study.

2.4 Ontological Assumptions

Ontology is mainly concerned with the nature of social reality relative to social entities (Bryman, 2012, p. 32). This issue is important for presenting the researcher's position with respect to social reality and social actors. An objective ontological stance regards social reality as external to social entities: it cannot be influenced by actors, it represents a restraint on them, and it exists regardless of whether actors acknowledge it or not (Collis & Hussey, 2013, p. 49). On the other side, social entities or actors are the essential building blocks of interpretivism (Saunders et al., 2012, p. 106). Social reality can be influenced by social entities; hence, social reality is forged and described according to the social entities' respective perceptions of each other (Bryman, 2012, p. 32). This results in a more subjective view of social reality. In this research, objectivity is more appropriate, since we intend to conduct an empirical test to scientifically validate previous evidence and test our developed hypotheses. We need to acquire a deeper comprehension of implied volatility and how it changes in relation to its underlying assets. We believe that, in order to clarify the nature of the relationship and behaviour between factors and give our verdict upon the research question, objectivity is necessary. Meanwhile, interpretivism would induce distortions in social reality and could hence introduce flaws and limitations into our research process.

2.5 Epistemological Assumption

Epistemology not only refers to the know-how of acquiring knowledge but also forms a yardstick that we can use to substantiate this knowledge and create a science (Collis & Hussey, 2013, p. 47).
Hussey, 2013, p. 47). Similarly, according to the Lexico Dictionary by Oxford (www.lexico.com), “Epistemology is the theory of knowledge, especially with regard to its methods, validity, and scope as it distinguishes justified belief from opinion.” In other words, epistemology concerns the accredited tools we can use to acquire knowledge. For positivist researchers, information is only acknowledged when it is observable, measurable, and verifiable via appropriate empirical tools; the independence of the researcher also plays a vital role in maintaining the objectivity of the research (Collis & Hussey, 2013, p. 47). Contrariwise, since the interpretive researcher regards social reality as subjective and takes part in formulating it, the involvement and opinions of the researcher are important for conveying his understanding of social reality. Hence, the interpretive researcher is not independent, and he also acquires evidence from participants in that social reality (Collis & Hussey, 2013, p. 47). In our research, we intend to use a positivist epistemological approach. We believe that this approach is appropriate for our empirical testing. Generalizability is a major advantage of the positivist approach, and we aim for research findings that are useful and generalizable. Needless to say, our independence is important in this research, so using interpretivism as a general approach would be inadequate.
2.6 Axiological Assumption
Axiology refers to the role of the researcher’s values and ethics in the research process (Saunders et al., 2007, p 134). Our values and ethics contribute not only to our view of the subjects we study but also to our sense of right and wrong and, eventually, to the moral compass that guides our behaviour and actions. The researcher must therefore be aware of the impact of his ethics and values on his research and judge whether that impact is positive or negative (Saunders et al., 2007, p 134). Accordingly, the positivist researcher regards social reality as independent and beyond his reach, something he cannot influence (Collis & Hussey, 2013, p. 49). Positivist researchers also tend to neutralise the effect of their values on the research (Collis & Hussey, 2013, p. 48). In contrast, interpretive researchers acknowledge that they hold an axiological stance and that research cannot be value-free (Bryman, 2016, p 39). Furthermore, the interpretive researcher sees the involvement of values and morals as essential to providing a meaningful explanation of the research findings (Collis & Hussey, 2013, p. 49); those values and morals can belong to the researcher as well as to the research subjects (Bryman, 2012, p 40). In this research, we are concerned with studying the relationship between changes in option prices and changes in the prices of the underlying assets. We see ourselves as independent from the subject of study and keen to present an objective view of our research findings, and we consider our research to be quasi value-free. Hence, we will take a predominantly positivist axiological stance throughout our research.
  • 19. 13 2.7 Rhetorical Assumption Following Collis & Hussey (2013, p. 48). Language serves as a medium of exchange to communicate knowledge. Hence, it is important to consider the language used in an academic paper as a complementary part of the whole research paradigm. As we mentioned, positivists see themselves independent from the social reality and the research subject and seeking more objectivity and less bias. This has made it more logical to use a passive voice presenting the ideas and explaining the findings. On the contrary, since interpretivist can involve his ideas and beliefs and moral stances, the interpretivists can use an active voice to express their ideas and also incorporate their values to explain the research findings (Collis & Hussey, 2013, p. 48). Considering the above, we believe that the positivist approach is more optimal to use in our research. It will help us produce more measurable and generalizable results. 2.8 Research approach and methodological assumption Every researcher needs to use theory in his research. Hence, the question about the research approach is a question about the purpose of the research relative to the used theory. This is an important aspect as the researcher needs to be aware of whether he will build a theory or test a theory as this would reflect on the research design (Saunders et al., 2012, p 134). Also, the research approach helps the researcher to make an enlightened decision in terms of research strategy, a distinct comprehension for the study findings, and a sound understanding of the sort of conclusions of the research (Sekaran & Bougie, 2016, p 30). This gives the researcher to choose between the inductive, deductive, abductive approach to conduct his research. Inductive research occurs when a researcher wonders about the reason behind the occurrence of his observation (Sekaran & Bougie, 2016, p 26). Usually, this approach is not based on previously developed theories (Saunders et al., 2016, p 52), and it is usually consistent with generating a theory or evolving the theory to get a richer theoretical perspective to answer his questions (Sekaran & Bougie, 2016, p 26). In the inductive approach or qualitative research, the researcher studies the participant’s morals and the liaison between them (Saunders et al., 2016, p 168). Data collection under the inductive approach doesn’t have a single standard and could be changed during the research as it is an interactive and natural process (Saunders et al., 2016, p 168). Since the inductive approach is an exploratory approach, it may use interviews, either structured or semi- structured to gather information (Collis & Hussey, 2013, p. 4). On the other hand, the deductive approach is “Scientific research pursues a step‐by‐step, logical, organized, and rigorous method (a scientific method) to find a solution to a problem” (Sekaran & Bougie, 2016, p 23). Hence, when the researcher embraces an obvious theoretical stance and begins the process of collecting and analyzing data with the intention to test this theory (Saunders et al., 2016, p 52). Furthermore, a deductive strategy is more often coherent with a quantitative research approach (Bryman,2016, p 32). The deductive approach usually tries to illustrate the cause and effect relationship between concepts and research objects (Saunders et al., 2016, p 146). Unlike the inductive
  • 20. 14 approach, testing theories using a deductive approach needs gathering a huge amount of data, processing this data needs an advanced structured methodology object (Saunders et al., 2016, p 150). Between testing theory with deduction and generating theory with induction lies the abduction research strategy and it is “A form of reasoning with strong ties with induction that grounds social scientific accounts of social worlds the perspectives and meanings of participants in those social worlds” (Bryman,2016, p 709). The abduction came in order to act as a mediator between the previous research approaches is the Abductive research strategy. Likewise, researchers gather data to excavate social phenomenon, describe main themes, and illuminate patterns, to build new or reshape existing theory then, testing this newly reshaped theory (Saunders et al., 2016, p 145). In our research, we will use an abductive approach as it would enable us to customize our data collection method research design as well as data processing. This will make us able to have a crafted research strategy that is tailored to fit our research question regarding the proportionate change implied volatility and its relation to the change underlying asset price. We believe that the abductive approach would give us an agile research strategy to undertake our research. The ability to form a special compounded research strategy combining quantitative and qualitative techniques when needed, would empower us to shed more light on the conflict of evidence we have. 2.9 Research Method There are two main categories to choose from when conducting research quantitative and qualitative methods of research design (Saunders et al., 2016, p 164). The difference between the Quantitative and qualitative research methods emerges from the kind of data itself (Neuman, 2014, p 167), quantitative research usually distinguished by collecting and processing data numerical data meanwhile, qualitative uses nonnumeric data (Johnson & Christensen, 2014, p 82). However, practically speaking, business researches often needs to combine both research methods (Saunders et al., 2016, p 165). Positivism is mostly associated with quantitative approach (Saunders et al., 2016, p 167). Furthermore, the quantitative approach is mostly connected to the deductive approach, which is usually used when the purpose of the research is testing theory. Meanwhile, the qualitative approach is often associated with interpretivism (Saunders et al., 2009, p. 52). The qualitative approach is often used to gain a deeper understanding of the theory itself or to investigate certain phenomenon (Johnson & Christensen, 2014, p 82). In our research, we believe that the quantitative approach will be the main approach in our research. This approach will enable us to use statistical and mathematical models in order to test our formulated hypothesis of the applicability of extending the predictive power of implied volatility over periods that exceeds the lifetime of the options itself.
  • 21. 15 2.10 Research design Research design is often described as the way or the road map that the researcher would follow to answer his research question (Saunders et al., 2016, p 145). Similarly, “is the plan or strategy you will use to investigate your research” (Johnson & Christensen, 2014, p 182). As per Collis & Hussey (2013, p 97), the importance of choosing a research design lies in illuminating the way to the researcher to draw a comprehensive plan that is used as a guidance in answering the research question with the best possible way. And since research approaches are quite contrasting, this could result in “miscommunication and misunderstandings” (Neuman, 2014, p 167), if the research design weren’t accurately chosen. Furthermore, the kind of research relies on the intended goal of the research, which can be separated into four types: descriptive, explanatory, exploratory, and predictive (Collis & Hussey, 2013, p 3). The main goal for descriptive research is to depict a detailed narration of the status of the components of the social phenomenon (Johnson & Christensen, 2014, p 547). Furthermore, Descriptive research provides a description of the characteristics of a situation, social phenomenon, or relationship (Neuman, 2014, p 167). The main target of descriptive research is not explaining the causal relations between variables but to portray the variables and illustrate the relationships between those variables (Johnson & Christensen, 2014, p 547). Usually, the outcome of this kind of study is an overall depiction of the social phenomenon (Neuman, 2014, p 39). After the descriptive research comes the explanatory research design or analytical research. It is thought this kind of research serves as an extension for descriptive research (Collis & Hussey, 2013, p 5). Also, Johnson & Christensen (2014, p 547) define explanatory research as “Testing hypotheses and theories that explain how and why a phenomenon operates as it does.” Hence, as it appears from the definition that the researchers, according to this design not only keen to describe the social phenomenon as it is but also keen to understand and analyse the relationships between its variables, furthermore, studying the mechanism upon which the social phenomenon happens and testing their formulated hypothesis (Neuman, 2014, p 40). Thirdly, comes the exploratory research, as the name speaks for itself, this type of research is to shed light upon a social phenomenon or a certain case that we have little knowledge on and begins investigating this phenomenon (Johnson & Christensen, 2014, p 582). In other words, it’s the kind of research that investigates less understood subjects and aims to produce an initial idea about it (Neuman, 2014, p 38). Thus, the exploratory approach provides a useful way and wonder about the phenomena around us and acquire new knowledge (Saunders et al., 2016, p 174). Furthermore, it provides the researcher with more malleability and flexibility in the research process (Saunders et al., 2016, p 175). The last type of research design is the predictive research, and it refers to “a research focused on predicting the future status of one or more dependent variables based on one or more independent variables” (Johnson & Christensen, 2014, p 870). The main aim of this type of research is to generate a generalization based on a previously formulated
  • 22. 16 hypothesis that could give the ability to predict the future outcome or behaviour of a certain phenomenon (Collis & Hussey, 2013, p 5). In our study, we consider our research as exploratory research. Although there could be many pieces of literature about implied volatility, and it has been proven that it provides a superior result when compared to historical volatility. However, implied volatility measures are only valid during the options life. Consequently, there is so much little we know about the long-term predictive power of it. Hence, we would use quantitative data to investigate and test our formulated hypothesis. Our aim is to provide a piece of empirical evidence on the predictive power of implied volatility in the long term. 2.11 Data Collection Method in Quantitative Research Under the positivist paradigm with a quantitative approach, the first concern is collecting a representative sample that captures exemplify the features in concern of the desired population (Neuman, 2014, p 38). Since it is not possible to study every individual information available in the population, we will have to take a sample. Sample method When sampling for quantitative research, the researcher needs to be aware of the kind of sample that he chooses for his research. There is more than one type of sampling representative sampling and biased sampling (Johnson & Christensen, 2014, p 344). In other words, the researcher has two methods to use in order to collect data, probability or representative sampling, and non-probability sampling (Saunders et al., 2016, p 275). A representative sample is a sample that captures all the features and characteristics of the original population but smaller in size (Johnson, B & Christensen. L, 2014, p 344). Similarly, it is defined as “the chance, or probability, of each case being selected from the population is known and is usually equal for all cases” (Saunders et al., 2009, p. 213). Hence, a representative sample is more and achieve high precision, and efficient to use in quantitative research, also representative sampling is considered to be highly cost- effective relative to the efficiency level it provides (Neuman, 2014, p 247- 248). Alternatively, a biased sample is a sample chosen by the researcher that has a common criterion of selection that makes it consistently different from the original population (Johnson & Christensen, 2014, p 344). Furthermore, nonprobability sampling techniques are considered less demanding compared to probability sampling in terms of the mathematical processing of data (Neuman, 2014, p 248). There are three sub-techniques to reach a nonprobability sampling convenience sampling, snowball sampling, and quota sampling. Convenience sampling is “A non-random sample in which the researcher selects anyone he or she happens to come across.” (Neuman, 2014, p 248). Hence, the selection criteria are accessibility, readiness, and effortless available individuals of the sample (Neuman, 2014, p 248). However, this method is not suitable for all researches as it may generate a very distorted picture of the population (Neuman, 2014, p 248).
  • 23. 17 In contrast, quota sampling is “A non-random sample in which the researcher first identifies general categories into which cases or people will be placed and then selects cases to reach a predetermined number in each category” (Neuman, 2014, p 249). Thus, it’s a designed selection that aims to formulate a more representative picture of the population based on targeting a specific criterion in each group that would eventually resemble the population studied (Johnson & Christensen, 2014, p 363). Although it might seem an accurate method representing all the possible categories in the population seems to be hard to achieve sometimes (Neuman, 2014, p 249) Furthermore, snowball sampling is a method was the researcher asks the participants to refer to individuals from their social network who have a certain criterion the researchers are interested in and who are willing to participate in the study (Johnson & Christensen, 2014, p 365). This method depends on the exponential effect of the human social network to acquire more participants at the beginning then getting bigger by time (Neuman, 2014, p 275). Due to the fact that we will conduct our research on testing the formulated research question, “Can implied volatility be used to improve in- and out-of-sample performance of the GARCH (1,1) and EGARCH (1,1) models?”. Since quantitative research has the ability to employ a single data collection technique, we choose to base our research on a 15 years sample of data gathered from Eikon According to our research we will draw the samples based on the most popular stocks that we can consider as a representative for the studied markets. Hence, we chose two exchange market indexes of standard and poor’s 500 for the USA and Euro Stoxx 50 for Europe. We think that the chosen time interval of 15 years will provide us with enough data to conduct our research. The researchers think that this time span, including the rise and fall of the international financial crisis and its repercussions, would enable us to validate our findings. Literature and source criticism To conduct this research, the researchers only relied on appropriate literature and theories that have a sound basis. We have chosen Umeå University library to be the main source of data. Besides, we have used Google scholar. To make sure of the accuracy and efficiency of the data collected, we have used Eikon to extract our data. Researchers have used articles from the most respectable journals to depend on or to refer to in our research. We have chosen the most relevant and most useful resources available that we could find on implied volatility, the cost of volatility, volatility as a fear gauge, the predictive power of volatility, and mathematical models that we have used. We chose to use GARCH and Black-Scholes models variants and MZ in calculating and processing data as it was the most appropriate model we could find. And can provide our research with a sound basis we can build on. The researchers were keen to provide the most adequate keywords and definitions and terminologies that promote cohesion to the research as well as homogeneity. Citations and references were applied according to Umeå business school thesis manual.
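As a concrete illustration of the models named in the paragraph above, the following is a minimal sketch rather than the exact specification estimated in this thesis. In the spirit of Day and Lewis (1992), who added implied volatility to the conditional variance equation (discussed in Section 3.1 below), a GARCH(1,1) variance augmented with lagged implied volatility can be written as

$$\sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2} + \gamma\,IV_{t-1}^{2},$$

and a Mincer–Zarnowitz (MZ) evaluation regression of realized volatility on a model's one-step-ahead forecast can be written as

$$RV_t = a + b\,\widehat{RV}_t + e_t,$$

where an unbiased forecast corresponds to the joint hypothesis a = 0 and b = 1. The symbols ω, α, β, γ, a, and b are generic placeholders, not estimates from this study.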
  • 24. 18 3 Theoretical Framework The main goal of this part is to present a review of previous literature about the subject of the research, as well as theories that are in line with our study, coupled with the selected theory part. The choice of theory part involves our motivations arguments behind our choice, as well as the driving factors that support our viewpoint about the theoretical foundation of our study. In other words, the theory is selected in such a way that tailored to our expected findings. 3.1 Review of Prior Studies In the financial literature, there is a common thought amongst researchers that there is a negative relationship between the implied volatility index and stock returns. Also, researchers have found that volatility of stock markets respond differently or disproportionately relative to the direction of the change in stock returns, they found that volatility is more sensitive to adverse return shocks than positive return shocks (Shaikh & Padhi, 2016, p. 28). However, the relationship between implied volatility and realized volatility in terms of informational content and forecasting future is not as clear yet. There is conflicting empirical evidence on the role of implied volatility in forecasting future volatility and its informational content. In the following lines, we present a summary of selected articles that studied those concepts. Latane and Rendleman (1976) suggested that the volatility embodied in the option prices can be calculated based on the assumption that all investors and options traders behave according to the Black-Scholes model. In other words, they assume that the implied volatility is equal to the actual volatility of the underlying stock. They were motivating their assumption by illustrating that the ability to estimate implied volatility correctly relies on a reasonable estimation of the impact of dividend payments, transaction costs, time differences, taxes, etcetera. They were suggesting a methodology based on utilizing the weighted average of implied standard deviations as a tool to measure the future volatility of stock returns. The reported findings confirm that standard deviations of market prices differ from these suggested by the Black and Scholes model. Hence, the prices suggested by the model does not capture the real factors that determine the decisions of options traders. More importantly, they suggested that implied standard deviations are superior in forecasting volatility than those that derived depending on historical volatility. Similarly, Harvey and Whaley (1992) investigated the dynamic behaviour of implied volatility to evaluate market options. They have built their study on implied volatility on the S&P 100 options index. The documented findings indicate that the options market is efficient. They reported that the changes in future volatility could be forecasted using implied volatility. Alternatively, Day and Lewis (1992) examined the informational content of implied volatility that is derived from call options on S&P500 index using GARCH and EGARCH models. They have added the implied volatility as an explanatory
variable to the conditional variance equation to capture the informational content of implied volatility. However, the documented results showed conflicting evidence: neither implied volatility nor the conditional volatility of the GARCH and EGARCH models could capture realized volatility completely. Moreover, Canina and Figlewski (1993) countered the previous research. They analysed over 17,000 OEX call option prices on the S&P 100 and provided a surprising result that sharply conflicts with the traditional perception: implied volatility has very little informational content about future realized volatility. They also refuted the assumption that options traders' decisions are irrational; although they could not rule out irrationality completely, they considered traders to be efficient. Consistently, Lamoureux and Lastrapes (1993) explored the behaviour of implied volatility compared to the volatility of the underlying assets. They rejected the joint hypothesis that options markets are efficient and that option pricing models work correctly. On the other hand, Jorion (1995) presented supportive findings, reporting that implied volatility had more informational content than historical volatility. He investigated the predictive power and the informational content of implied volatility derived from options on foreign currency traded on the Chicago Mercantile Exchange. However, implied volatility was found to be a biased forecast of future volatility. In the same fashion, Xu and Taylor (1995) compared the predictive power of implied volatility and historical volatility using a sample of four exchange rate series from 1985 to 1991. They showed that volatility forecasts based on implied volatility carry more information than those based on historical volatility. However, the documented results also failed to reject the hypothesis that past returns carry no information beyond that provided by option prices. Furthermore, Christensen and Prabhala (1998) supported the findings of Xu and Taylor (1995) by confirming that implied volatility subsumes all available information in options markets. They reported that implied volatility dominates historical volatility as a forecasting tool for future market volatility. They strengthened their results by employing a larger data sample and explained the deviations of previous studies by the regime shift around 1987. Fleming (1998) investigated the forecasting performance of S&P 100 implied volatility. He found that, despite an increasing bias in implied volatility forecasts relative to realized volatility, implied volatility forecasts subsume valuable information about realized volatility. He suggested that a linear model correcting the bias of implied volatility would be a useful tool for forecasting market expectations of future volatility. In contrast, Pong et al. (2004) provided evidence quite different from previous studies, suggesting that historical volatility is more accurate than implied volatility in forecasting future market volatility; moreover, the informational content of historical volatility for forecasting future volatility surpasses that derived from implied volatility. Becker et al. (2006) challenged Fleming's (1998) findings.
They examined the ability of implied volatility to incorporate all available information based on the VIX index. Their hypothesis was that implied volatility, which is extracted conditional on option prices,
  • 26. 20 should not only reflect available published information, but also represents the best market expectations about the future volatility of underlying assets. They concluded that implied volatility has a reduced efficiency regarding all factors to be used in volatility forecasting. They have shown that although there is a positive correlation between implied volatility index and future volatility of the underlying asset, implied volatility does not improve the forecasting performance of future volatility forecasts. Becker et al. (2007), continued their research to obtain a better comprehension of the informational content of implied volatility. They investigated the VIX index and its relevance to forecasting models, and if VIX index would have more information than the historical volatility. Their research showed that implied volatility does not include any further information than historical volatility. Yu et al. (2010), they have investigated the predictive power of volatility embodied in options that are traded in both OTC and exchange. The documented findings were indicating that the predictive power of implied volatility is superior to the historical volatility regardless of the options were traded in OTC or exchanges. Adversely, Bentes (2017) provided a contradicting evidence in his study about the relationship between implied volatility index and realized volatility. He used monthly data from the BRIC countries. He sought to explore the informational content of implied volatility and its role in explaining the realized volatility. More importantly, He has utilized useful statistical methods such as Autoregressive distributed lag (ADL) and correction error (EC) and paralleled the extracted results with ones that employ the OLS regression methods. The results evidenced that implied volatility is not efficient for any of BRIC countries, albeit it was unbiased for India. Recently, Mohammad and Sakti (2018), they have investigated the informational content of volatility embodied in the call options in the Malaysian stock market. They used daily data for 100 trading days between 2013 and 2014. The documented findings indicate that implied volatility does not contain relevant information about the future market volatility of underlying assets. Furthermore, the forecasting based on implied volatility is less accurate in relative to predictions by historical volatility. Alternatively, Pati et al. (2018) had a piece of conflicting evidence, they studied the informational content of implied volatility index relative to their corresponding stock market indexes for three Asian countries, i.e., India, Australia, and Hong Kong. The documented findings showed that implied volatility is biased in forecasting the future stock market volatility. However, it has contained relevant information that can justify the future realized volatility. 3.2 Efficient Market Hypothesis The efficient market hypothesis had emerged in the mid-60s through the efforts of the Nobel prize winner Paul Samuelson when he promoted Bachelier’s work among other economists with empirical its studies (MacKinlay, & Lo, 1999, p. 3). The efficient market hypothesis (EMH) has paved the way for economists and mathematicians like Fama (1965 p. 55) to formulate the random walk theory based on the same fundamental assumptions of the efficient market hypothesis.
  • 27. 21 This theory states that in an efficient market, the realized stock price fully reflects all available information in the market, and this price is the fair price for that stock (Fama, 1991, p. 1575). Since stocks constantly trade at their fair value on trades according to the EMH, there is no under-pricing or overpricing of the stocks, which means that achieving profits by acquiring undervalued or underpriced stocks and sell stocks for a higher price is impossible for investors. In other words, no investors could consistently achieve abnormal returns on the basis that information is available to all traders, and the prices are quickly adjusted to reflect all new information (Houthakker & Williamson, 1996, p. 25-26). Thus, according to the Efficient market hypothesis, no one or investor can achieve more profits than the market consistently unless there is an additional risk that has been acquired or added to the investment. To sum up the theory • The efficient market hypothesis EMH states that share prices fully reflect all information available in the market. • The EMH assumes that stocks trade transactions occur at their fair market value on exchanges. • EMH postulate that investors can use passive investment as a low-cost method for investment. • Opponents of EMH believe that it is feasible to outclass the market and that stocks can differ from their fair market values. To understand the EMH theory, it is important to differentiate between two important components of the theory, the concept of informational efficiency and the operational efficiency of the stock market by considering that operational efficiency is translated in the ability of investors or traders to perform their traders at the lowest possible costs. Consistently, Fama (1991, p. 1575), states that a strong form of this efficacy requires the average costs to make the stock prices collapse all available information should be zero. Malkiel and Fama (1970), has categorized markets into three classes based on their operational and informational characteristics with empirical tests of efficiency into “weak-form”, “semi-strong-form”, and “strong-form” tests. The more information is available to all parties in the market, the more efficient the market also; the less the cost of transactions, the more efficient the market will be. In the weak form of the efficient market, the stock prices will only reflect the historical information, while the semi-strong form states that stock price will reflect the current and past publicly known information. Concerning the strong form efficiency, the stock prices will not only reflect the public and past information relevant to the firm but also the information that just available to the firm insider traders (Fama, 1970, p 383, Thaker & Jitendra K, 2008, P. 62). Hence, the deviations from the strong form of market efficiency are raised from the transaction’s costs and the quick access to the information without barriers (Fama, 1991, p. 1575). In the semi-strong form of an efficient market, the hypothesis is inconsistent with implied assumptions of EMH. In the semi-strong form of EMH, the traders may get the opportunity to receive information earlier than other market participants, so they will have the advantage to create a temporary monopoly control of information (Schwartz, 1970, p.422). But the current stock prices in efficient markets will quickly glean and respond to any new information (Thaker & Jitendra, 2008, p. 62). 
The previous studies of stock market efficiency conducted by Fama found little evidence against the EMH (Erdös &
  • 28. 22 Ormos, 2010, p. 1062). However, Houthakker and Williamson (1996, p. 36) argue that arriving to a strong form of an efficient market is considered too difficult. 3.3 The Random Walk Theory The theory states that changes in stock prices have the same distribution and are independent of each other. Hence, it implies that the previous changes, i.e., historical changes or trends are irrelevant and cannot be utilized to forecast stock prices. The random walk theory alleges that stock prices are totally random and unpredictable what consequently makes forecasting methods to be ineffective in the long run. To understand the logic behind the above conclusion of the Random walk theory, we must understand its basic assumptions. The random walk theory supposes that the market is efficient, and the price of stocks reflects all available information. Hence, there is no investor that can achieve abnormal returns or surpass the market performance without being committed to additional risk. The second assumption is that the movement of the stock prices is considered a completely random variable, which means that price changes are stochastically independent (Cheng & Deets, 1971, p.11), and investors base or initiate their decisions of buying and selling after a recognized trend has already developed. To illustrate, if we suppose that there are non-random variations in stock prices. That means that there is either a steady upward trend or a downward trend. Thus, the speculations and other financial traders could predict and interpret the stock price movements before it occurs. Hence, the stock market investors would make their investment decisions consistent with their expectations. The traders would buy before the stock prices increases and sell before the stock prices decrease. All these together would make the behaviour of investors and speculations in response to predictable stock price changes is stylized nature. Given the large numbers of speculations and traders in the stock market, who are assumed to be well informed about the predictable changes in stock prices, it follows, no one can predict the behaviour of stock prices (Miller, 1979, p. 55). According to Pant and Bishoni (2001, p. 3), the core nature of the random walk theory states that tracking a variable over a selected specific time is not expected to follow an identified design. Furthermore, the theory claims that the quality of information collected to be used in fundamental analysis is often of poor quality, what makes the technical analysis of the stock prices unreliable. To sum up the theory • The theory proposes that fluctuations in stock prices are autonomous and have the same distribution. • The theory implies that the historical change or trend of a stock price or market cannot be used to forecast its future price. • The theory believes investors must commit to additional risks to surpass market performance. • The theory considers technical analysis unreliable as investors make decisions about buying or selling a stock after the movement has been formed. • The theory believes fundamental analysis is unreliable because of the poor quality of information gathered not mention that it could be miscalculated as well. • The theory claims that investment advisors contribute with little or no value.
  • 29. 23 According to the above, we can conclude that in an efficient market where no one could make profits based past information, the stock prices should follow the random walk (Pant and Bishoni, 2001, p. 3, Miller, 1979, p. 55). On the other hand, NM Jula and N Jula (2017, p. 878), reported that many studies found that prices of stock markets are predictable by traders to some extent. This suggests that traders can develop trading strategies based on some predictable trends of stock prices. More important, Miller (1979, p. 56), claims that random walk and efficient theories may hold only for options markets and equity markets. Motivating his viewpoint by the ease and speed with which options contracts can be achieved and transferred from party to another. It follows that, if there is cautiousness of a huge rise in stock prices, the informed traders would short sell big amounts to uninformed traders. Hence, the efficient market holds in the presence of informed traders. 3.4 No riskless arbitrage The opportunity has played a great role in providing coherent explanations for the numerous financial pricing models. Financial academics define arbitrage as the opportunity or the ability to employ a set of transactions without incurring any costs to achieve risk-free profits (Tuckman & Jean-Luc, 1992, p. 1283). With attention to that, one of the key assumptions of Black-Scholes formula is the no riskless arbitrage argument, which means that when we are able to remove all risks from a portfolio, then the earnings achieved by this portfolio will be equal to risk-free rate return (Hull, 2009, p. 237). This assumption would allow us to eliminate costs incurred due to reproducing portfolios and hence, the price of the option (Hull, 2009, p. 237). On the understanding that the mathematical aspect, which is utilized in the option pricing theory, i.e., the Black- Scholes model, is quite advanced. Generally speaking, the importance of no-arbitrage argument obtained from the arguments provided by Black and Scholes (1973, p. 637), who argue that if the market provides accurate options prices. Then, the opportunity to achieve profits by setting up a portfolio consists of a long position in a stock asset, and a short position in options on those stock assets will be zero. Grounding on this argument, the option valuation formula is derived. In essence, the underlying logic behind setting up the riskless portfolio only from stock and option written on the stock arise from that both stocks and options are affected by the same underlying risk, i.e., the changes in stock price. Given the short time of the option life, it follows that each of them is perfectly correlated with others. Thus, the gain obtained from the positive movement of stock price will be offset by the loss on the option value. Altogether, the overall value of the portfolio will be the same at the end of the option life. This means the uncertainty convert to certainty and the earn achieved by the created portfolio should be equal to the risk-free rate (Hull, 2009, p.285). 3.5 Black and Scholes model The pricing process of the options in the stock markets represents one of the most essential aspects of modern financial theory. At the beginning of 1973, Fisher Black and Myron Scholes have developed an advanced formula for the pricing of European call options,
which in turn represented a real revolution in financial engineering (Dar & Anuradha, 2017, p. 231). The BS model has served as the cornerstone of option pricing in the stock markets (Lee et al., 2005, p. 330). To avoid letting the mathematical aspect dominate, we restrict our presentation to the call option formula, the assumptions of the model, and the general logic behind it. According to Viot (2005, p. 53), the underlying assumptions of the BS formula are the following:
• A fully competitive market, i.e., efficient, complete, and with zero transaction costs.
• The impact of taxation on the transaction is neglected.
• All market participants use the risk-free rate as the borrowing and lending rate.
• The underlying stock does not pay dividends.
• The formula applies to European options, given that the option can be exercised only at the expiration date (Black & Scholes, 1973, p. 640).
Following Bodie et al. (2018), the BS formula is widely used to price European call options based on the probability that the option holder will exercise the option. The Black-Scholes formula for pricing the call option can be written as:

$$C_t = S\,N(d_1) - X e^{-rT}\,N(d_2) \tag{3.1}$$

$$d_1 = \frac{\ln(S/X) + \left(r + \tfrac{\sigma^2}{2}\right)T}{\sigma\sqrt{T}} \tag{3.2}$$

$$d_2 = d_1 - \sigma\sqrt{T} \tag{3.3}$$

where C_t denotes the price of the European call option, i.e., the call premium, S is the current stock price, X is the exercise or strike price at which the option can be exercised, T is the time to expiration, r is the risk-free interest rate, σ is the volatility of the underlying stock, N(d) is the cumulative standard normal distribution, i.e., the probability that a standard normal random variable will be less than d, and e represents the exponential term. According to Black and Scholes (1973, p. 638), the option will be exercised only when it is valuable, which means that the current stock price is greater than the present value of the strike price; otherwise, the call option is worthless and will not be exercised at the expiration date. Since the value of the call option increases when the present value of the strike price decreases, and since the present value of the strike price decreases as the time to expiration increases, it follows that the call option becomes more valuable as the time to expiration increases, and vice versa; i.e., the value of the call option decreases as the present value of the strike price approaches the stock price. Consistently, Bodie et al. (2018, p. 715) interpreted the BS formula by illustrating that N(d) represents the adjusted
  • 31. 25 risk probability that the call option will expire in the money, i.e., the option is valuable. By suggesting that call option will be pretty sure exercised if both N(𝑑) approach close to 1. While the option would not be exercised when N(𝑑) approach to 0. The motivation of this viewpoint by claiming: when N(𝑑) approach close to 1 the call option will be expressed as 𝑆𝑜 − 𝑋𝑒−𝑟𝑡 , which in turn represents the difference between stock price and the present value of strike price 𝑆𝑜 − 𝑃𝑉(𝑋). While Nielsen (1992, p.1-2), elaborated that N(𝑑2) is represent the probability factor that option will expire in the money. While the term N(𝑑1) is a little bit complicated since it represents that the expected value of the amount received of conditional obligation on the stock will surpass the stock price in the expiration date. 3.6 Choice of The Theory There are several theories that could be used as a theoretical foundation for our study in terms of their ability to interpret the price fluctuations as well as the underlying causes behind these fluctuations. First, the random walk theory could have provided a good basis for this research. however, Since the random walk theory suggests that constantly outperforming the market is not possible because of the unpredictability of stocks behaviour, the most controversial part implied by the theory is that analysts and professional advisors have minimal or no influence on the value of the portfolio. Hence, the author has promoted a buy-and-hold investment strategy as the best way to achieve returns. Furthermore, critiques of this theory advocate, the time spent by each investor is different from other investors and that due to the massive numbers of investors the possibility of forming a trend, in the short term is relatively high, and then investors can outperform the market by strategically buying stocks when the price is low and resell it when the price gets higher within a short time. Also, other critiques are concerned with the basis of the whole theory claiming that stocks and prices follow a pattern and trend, at least in the long term, if not in the short term. And they justify that due to the enormous number of factors that could affect the stock price, it makes it impossible to account for each factor and argue that it couldn't be said that there is no pattern at all because there is no clearly defined pattern the stock prices follow. Furthermore, the human behaviour and attitude plays an important role in market volatility which could not be measured accurately. Consistently, the random walk theory confirms that current prices reflect the available information in the stock market, and future fluctuations are mainly reflecting the individual investor expectations about the future, which are difficult to compress or formulating in determined style or pattern. Second, the efficient market hypothesis assumes that all the available information are accessible to all investors at the same time and they are not only perceived and interpreted exactly the same way but also, investors have the same response, which is a very dangerous assumption since there are different ways to evaluate stocks as well as investors' perceptions and actions taken based on different analysis methods implied that there is definitely a difference in investors decisions. Hence, this concept is considered paradoxical in this theory.
The theory also fails to consider the vast range of investment options and their relative returns in the investment market. For example, it cannot provide a rational explanation of how mutual funds generate profits: according to the theory's fundamental assumptions, such mutual funds should not be able to generate profits, since every investor holds the same information. Nevertheless, the efficient market theory is suitable for this study and is in line with the random walk theory. This theory has been chosen on the grounds that the mismatch between the theoretical and market prices of options increases as the efficiency of the financial market decreases. More clearly, the implied volatility derived from the option price gets closer to the volatility of the underlying asset as the efficiency of the stock market increases. Consequently, the degree of deviation between implied volatility and the historical volatility of the underlying asset indicates the degree of market efficiency. In addition, the efficient market hypothesis holds that, since transaction costs decrease due to efficient competition between investors and traders in the stock market, stock prices react quickly to any new information. This implies that option prices (whose values are derived from stock prices) will also react quickly to this information and adjust accordingly. Hence, implied volatility changes in response to changes in option prices in the stock market, which in turn change in response to any new information arriving in the stock market. The no-riskless-arbitrage argument was not appropriate to adopt as a theoretical lens in this research, given the nature of the research question and the mathematical and statistical tests intended here, since it is concerned with an area outside the scope of this research. However, this argument forms the foundation of the Black-Scholes model, upon which implied volatility is calculated. Hence, it was important to include it in the theoretical literature.
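Since the Black-Scholes model discussed above is the basis from which implied volatility is derived, a brief illustrative sketch may help fix ideas. The snippet below is a minimal, hypothetical example and not the data-processing code used in this thesis: it prices a European call with the formula in equations (3.1)–(3.3) and then backs the implied volatility out of an observed market price by root-finding. The function names (bs_call, implied_vol) and the numerical inputs are our own illustrative choices, and the sketch assumes SciPy is available for the root finder.

```python
from math import log, sqrt, exp
from statistics import NormalDist

from scipy.optimize import brentq  # root finder used to invert the pricing formula


def bs_call(S, X, T, r, sigma):
    """Black-Scholes price of a European call (equations 3.1-3.3)."""
    d1 = (log(S / X) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf  # cumulative standard normal distribution
    return S * N(d1) - X * exp(-r * T) * N(d2)


def implied_vol(market_price, S, X, T, r):
    """Volatility that equates the model price with the observed market price."""
    return brentq(lambda sigma: bs_call(S, X, T, r, sigma) - market_price,
                  1e-6, 5.0)  # search between (near) zero and 500% volatility


if __name__ == "__main__":
    # Arbitrary illustrative inputs: spot 100, strike 105, three months, 1% risk-free rate.
    S, X, T, r = 100.0, 105.0, 0.25, 0.01
    print(f"Model call price at 20% volatility: {bs_call(S, X, T, r, sigma=0.20):.4f}")
    # Invert an observed market price back into an implied volatility.
    print(f"Implied volatility for a market price of 3.50: {implied_vol(3.50, S, X, T, r):.4f}")
```

This mirrors how implied volatility is obtained in practice: the pricing formula is treated as a mapping from volatility to price, and the volatility consistent with the quoted option price is recovered numerically.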
4 Fundamental Concepts of Volatility
In this chapter, the literature review introduces an outline of the concepts of volatility and of forecasting models. More specifically, those concepts cover the key dimensions of the problem at hand and the concomitant research question. We have taken care to support the theoretical conceptions with practical explanations in a way that serves and interacts with the general objectives of the study.
4.1 Volatility
Volatility Definition and Concept
Generally, differentiating between concepts such as volatility, uncertainty, risk, variability, and fluctuation requires drawing a fine line to distinguish the true indication of each concept. Uncertainty describes the possible outcomes of an event without allocating a probability to each outcome, whereas volatility is associated with risk through presenting a gauge of the potential fluctuations or changes in a specific financial variable such as stock prices, interest rates, inflation rates, and growth rates (Aizenman & Pinto, 2005, p. 3). In other words, while uncertainty is an implicit description of risk, volatility provides a device to measure this risk. Hence, the importance of understanding volatility arises from its direct impact on the welfare costs of risk-averse individuals, along with the implied costs of its negative impact on the economy overall (Loayza et al., 2007, p. 343). Estimating the volatility of returns is one of the main tools for brokers and practitioners in the stock market when managing their investment portfolios and making trading decisions (Oya, 2005, p. 940). In practice, volatility is defined as "the standard deviation of the return provided by the variable per unit of time when the return is expressed using continuous compounding" (Hull, 2012, p. 201). However, the standard deviation calculated from a representative sample of data is only an estimate of the variability of the population from which the sample was drawn (Altman, 2005, p. 903). As a rule of thumb, Hull (2012, p. 201) argues that volatility used in risk management takes one day as the unit of time, whereas volatility calculated for option pricing uses one year as the unit of time, i.e., the volatility is the standard deviation (SD) of the continuously compounded return per year. Given that the value of the stock price is P_t at the end of day t, the continuously compounded return at the end of this day is calculated as follows:

$$R_t = \ln\frac{P_t}{P_{t-1}} \tag{4.1}$$

where P_{t-1} denotes the value of the stock price on the previous day. Alternatively, the return can be calculated as the percentage change in the stock price, as follows:
$$R_t = \frac{P_t - P_{t-1}}{P_{t-1}} \tag{4.2}$$

4.2 Implied Volatility
In the Black-Scholes formula, the volatility of the underlying asset is the one parameter that has a great impact on the value of the option yet cannot be observed directly; instead, it is implied in the price of the option written on the stock (Deng et al., 2008, p. 16). More importantly, implied volatility in practice reflects the common view of the market about the future volatility of the stock price (Simon, 2002, p. 960). Consistently, Shaikh and Padhi (2015, p. 44) argue that implied volatility is calculated by comparing option prices produced by the BS model with option prices in the market. Notably, the option price yielded by the BS model depends on the value of volatility inserted into the model. Taken together, this suggests that a higher option price indicates greater market concern over the residual option life. Hence, the implied volatility embodied in option prices represents an ex-ante measure of future volatility. Conceptually, implied volatility is similar to the yield to maturity (YTM), given that the YTM is the rate that equates the bond price with the present value of the bond's expected payments; hence, we can derive the YTM by matching the market price of the bond with the present value of those payments (Whaley, 2009, p. 98). Worth mentioning, the majority of options brokers utilize the Black-Scholes formula to derive the implied volatility of the underlying assets. In practice, options traders use the Black-Scholes formula to determine the implied volatility of the underlying assets rather than using it as a pricing tool (Deng et al., 2007, p. 16-17). This is mainly justified by the fact that the volatility included in option prices is less volatile than the option prices themselves (Hull, 2009, p. 297). In this context, in 1993, the CBOE launched implied volatility indices. The well-known VIX is an index of implied volatility derived from 30-day options written on the S&P 500 index (Hull, 2015, p. 2005). The implied volatility index estimates short-term volatility, derived from the prices of options written on the equity index (Shaikh & Padhi, 2015, p. 45). For example, the VSTOXX index represents the implied volatility derived from the prices of options written on the Euro Stoxx 50, the European equity index. As an illustration, Figure 4.1 shows the time plot of the updated VSTOXX index from January 3, 2005, to December 30, 2019, based on daily observations. From the plot, during the financial crisis in 2008 and the beginning of 2009, the VSTOXX spikes and experiences high values that reflect the financial market turmoil. Furthermore, the European financial market was highly volatile during the period 2010-2012. In this period, the European sovereign debt crisis emerged, and the European financial market showed anxiety about the future, which was reflected in the values of VSTOXX.
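As a small numerical companion to equations (4.1) and (4.2) above, the sketch below computes both the continuously compounded and the simple daily returns from a short, made-up price series, and annualizes the standard deviation of the log returns under the common convention of roughly 252 trading days per year. The price list is invented for illustration and does not come from the thesis data set.

```python
from math import log, sqrt
from statistics import stdev

# Invented closing prices for a few consecutive trading days (illustration only).
prices = [100.0, 101.5, 100.8, 102.3, 101.9, 103.4]

# Equation (4.1): continuously compounded (log) daily returns.
log_returns = [log(p_t / p_prev) for p_prev, p_t in zip(prices, prices[1:])]

# Equation (4.2): simple percentage-change daily returns.
simple_returns = [(p_t - p_prev) / p_prev for p_prev, p_t in zip(prices, prices[1:])]

# Daily volatility as the sample standard deviation of log returns,
# scaled to an annual figure with the ~252-trading-day convention.
daily_vol = stdev(log_returns)
annual_vol = daily_vol * sqrt(252)

print([round(r, 5) for r in log_returns])
print([round(r, 5) for r in simple_returns])
print(f"Daily volatility: {daily_vol:.5f}, annualized: {annual_vol:.5f}")
```

The two return definitions are numerically close for small daily changes, which is why either can serve as the input when a volatility estimate such as the one in equation (4.1) is annualized for comparison with an implied volatility index.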