The document discusses various concepts related to time series analysis and volatility modeling:
1) It defines volatility, risk, and the difference between the two. It also describes how volatility can be measured.
2) It covers the concepts of historical volatility, implied volatility from options prices, and volatility indices. It also defines intraday volatility.
3) It discusses the concept of stationarity in time series and various tests to check for stationarity like the Dickey-Fuller test, Phillips-Perron test, and KPSS test.
4) It introduces the ARCH and GARCH models for modeling conditional heteroscedasticity or time-varying volatility observed in financial time series.
Superior Performance by Combining Risk Parity with Momentum? (Wilhelm Fritsche)

This document examines different strategies for global asset allocation between equities, bonds, commodities and real estate. It finds that applying trend following rules substantially improves risk-adjusted performance compared to traditional buy-and-hold portfolios. It also finds trend following to be superior to risk parity approaches. Combining momentum strategies with trend following further improves returns while reducing volatility and drawdowns. A flexible approach that allocates capital based on volatility-weighted momentum rankings of 95 markets produces attractive, consistent risk-adjusted returns.
This document provides an update to a previous study on the performance of passive and active collar strategies applied to the Powershares QQQ ETF (QQQ). The update extends the analysis period through September 2010. It finds that during market declines like the tech bubble and credit crisis, collar strategies provided downside protection and strong returns compared to a long position in QQQ. However, collars underperformed during strong market climbs. The document also analyzes applying collar strategies to a small cap mutual fund and finds similar beneficial results. It concludes that active collars, which dynamically adjust based on momentum, volatility, and macroeconomic signals, tended to outperform passive collars both in-sample and out-of-sample.
The Predictive Power of Intraday-Data Volatility Forecasting Models: A Case S... (inventionjournals)
The purpose of this study was to compare the predictive power of various volatility forecasting models. Using intraday high-frequency data, the study investigated the influence of sampling frequency on the predictive power of a volatility forecasting model. The empirical results revealed that realized volatility increased as the sampling frequency was reduced. The overall results showed that, for a one-day forecast horizon, the autoregressive moving average-generalized autoregressive conditional heteroskedasticity (ARMA-GARCH(1,1)) model delivered the best forecasting performance among the models compared, while the implied volatility model performed worst at all sampling frequencies.
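As a rough illustration of the realized-volatility calculation behind such studies, the sketch below sums squared log returns sampled at different frequencies from a simulated intraday price path. The one-minute random-walk prices are purely hypothetical:

```python
import math
import random

def realized_vol(prices, step=1):
    """Realized volatility: square root of the sum of squared log returns,
    sampled every `step` observations."""
    sampled = prices[::step]
    rv = sum(math.log(sampled[i + 1] / sampled[i]) ** 2
             for i in range(len(sampled) - 1))
    return math.sqrt(rv)

# Simulated 1-minute prices for one 390-minute trading day (hypothetical)
random.seed(1)
p = [100.0]
for _ in range(390):
    p.append(p[-1] * math.exp(random.gauss(0, 0.0005)))

print(realized_vol(p, step=1))   # 1-minute sampling
print(realized_vol(p, step=5))   # 5-minute sampling
```

In practice coarser sampling (e.g. 5-minute bars) is often preferred to the finest grid because it reduces the impact of market-microstructure noise on the estimate.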
This document discusses a case study that analyzed over 6,400 rules for trading the S&P 500 using data mining techniques. It describes how data mining bias can lead to overstating a rule's expected future performance. The case study used statistical inference tests like White's reality check and Masters' Monte-Carlo permutation method to minimize this bias. It details the various rule types analyzed, including trends, extremes/transitions, and divergence. Input data series included raw time series, indicators, and other preprocessed data. The goal was to identify rules with genuine predictive power and evaluate their statistical and practical significance.
The document discusses nonlinear models for volatility and correlation in financial data. It introduces the autoregressive conditional heteroscedasticity (ARCH) model and generalized ARCH (GARCH) models, which allow the variance of errors to depend on previous values. The ARCH model specifies the variance as a function of past squared errors. The GARCH model extends this to include the past variance, addressing issues with the ARCH model like how to determine the order q. Tests for ARCH effects and specifications of ARCH and GARCH models are also provided.
This document summarizes a case study analyzing rules for mining data from the S&P 500 stock market index. It discusses potential biases in backtesting rules to select superior performers and statistical methods to minimize these biases. Specific topics covered include data mining biases, techniques to avoid data snooping bias by splitting samples, defining the case study statistically, transforming data series into market positions with rules, constructing technical analysis indicators from price and volume data, and categories of rules examined including trends, extremes/transitions, and divergence.
A review of the assumptions behind fundamental, macro, and statistical risk models. Pros and cons of each approach. Introducing adaptive hybrid risk models.
The document discusses nonlinear models for volatility and correlation in financial data. It introduces the autoregressive conditional heteroscedasticity (ARCH) model and generalized ARCH (GARCH) models, which allow the variance of errors to depend on previous values. Specifically, a GARCH(1,1) model is presented where the conditional variance is a function of the lagged squared errors and lagged variance. The document also discusses testing for ARCH effects and some limitations of ARCH models that GARCH addresses.
The document discusses various methods of measuring risk and volatility in investments. It defines key terms like return, risk, standard deviation and volatility. It then explains different models used to measure volatility like EWMA, ARCH, GARCH and VaR. For EWMA, it provides the formula and explains how it is used to estimate volatility. For ARCH and GARCH models, it describes the concepts and formulas for ARCH(1), GARCH(1,1) and how they model conditional heteroskedasticity. Finally, it explains the variance-covariance and Monte Carlo methods to calculate Value at Risk (VaR).
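A minimal sketch of the EWMA variance recursion and the variance-covariance VaR step described above, assuming the standard RiskMetrics decay factor lambda = 0.94 and the 95% normal quantile 1.645 (both conventional choices, not taken from the document):

```python
def ewma_variance(returns, lam=0.94):
    """EWMA recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    seeded with the first squared return."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2

# Hypothetical daily returns
rets = [0.01, -0.02, 0.015, -0.005, 0.01]
vol = ewma_variance(rets) ** 0.5
var_95 = 1.645 * vol  # 95% one-day VaR as a fraction of portfolio value
print(vol, var_95)
```

The recursion makes the trade-off explicit: a high lambda gives a slowly adapting, stable estimate, while a low lambda reacts quickly to the latest squared return.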
AACIMP 2010 Summer School lecture by Vasyl Gorbachuk. "Applied Mathematics" stream. "Financial Mathematics" course. Part 4.
More info at http://summerschool.ssa.org.ua
The document summarizes a study that uses the Capital Asset Pricing Model (CAPM) to analyze the risk and returns of 5 stocks from 2013-2015. It calculates daily returns, beta, alpha, and the correlation of individual stock returns with market returns. The results show most stocks had a slight negative excess return and negative Sharpe ratio, indicating average risk-adjusted performance. Betas were all statistically significant, with GE closest to the market. R-squared values ranged from 20-48%, explaining some but not all variation in returns. The analysis supports that CAPM provides useful but imperfect insights into the relationship between a stock's risk and return.
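The beta and alpha calculations such a study relies on reduce to an OLS fit of stock returns on market returns. A minimal sketch, using hypothetical excess-return series rather than the study's data:

```python
from statistics import mean

def capm_beta_alpha(stock, market):
    """OLS beta and alpha of stock excess returns on market excess returns:
    beta = cov(stock, market) / var(market), alpha = mean residual."""
    mb, mm = mean(stock), mean(market)
    cov = sum((s - mb) * (m - mm) for s, m in zip(stock, market))
    var = sum((m - mm) ** 2 for m in market)
    beta = cov / var
    alpha = mb - beta * mm
    return beta, alpha

# Hypothetical daily excess returns
mkt = [0.001, -0.002, 0.003, 0.0, -0.001]
stk = [0.002, -0.003, 0.004, 0.001, -0.002]
beta, alpha = capm_beta_alpha(stk, mkt)
print(beta, alpha)
```

A beta above 1, as here, means the stock amplifies market moves; the study's observation that GE was "closest to the market" corresponds to a beta near 1.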
Analyzing the Outperforming Sector in a Volatile Market (Pawel Gautam)
This document provides background information on stock market volatility. It discusses what volatility refers to, factors that can influence volatility like economic conditions, news events, and investor psychology. It also covers different ways to measure and analyze volatility like standard deviation and average true range. The history and evolution of stock exchanges in India is briefly outlined to provide context for analyzing outperforming sectors in the volatile Indian market.
This document summarizes a student paper on the low-volatility anomaly. The paper examines whether low-volatility stocks achieve higher risk-adjusted returns compared to predictions of CAPM and MPT. It reviews literature explaining the anomaly through various behavioral biases. The paper tests the anomaly using 30 S&P 500 stocks over 20 years. Regression analysis finds no significant relationship between past stock volatility and future returns, providing no support for either CAPM or the low-volatility anomaly based on the sample. Statistical tests confirm the results and inability to reject the null hypothesis of no relationship between risk and return.
Capital Asset Pricing Model (CAPM): Evidence from Nigeria (Alexander Decker)
This document summarizes a research study that tested the predictions of the Capital Asset Pricing Model (CAPM) using stock return data from the Nigerian stock exchange from 2007 to 2010. The study combined individual stocks into portfolios to enhance the precision of estimates. The results did not support CAPM's predictions that higher risk (higher beta) is associated with higher returns. The study also found that the slope of the Security Market Line did not equal the excess market return, further invalidating CAPM predictions for the Nigerian market. The document provides context on CAPM theory and reviews prior empirical studies that have also found poor support for CAPM predictions.
Managers require accurate forecasts to make good decisions. There are three main categories of forecasting approaches: qualitative and judgmental techniques which rely on experience; statistical time-series models which analyze historical data patterns; and explanatory/causal methods which consider factors influencing changes. Some common forecasting techniques include moving averages, exponential smoothing, and trend line analysis, with error metrics like mean absolute deviation used to evaluate accuracy.
A Quantitative Risk Optimization of the Markowitz Model (Amir Kheirollah)
This thesis investigates assumptions of the Markowitz model and evaluates alternative measures for risk-adjusted return. It analyzes Swedish large cap stock returns and finds evidence against the normal distribution assumption. The Sharpe ratio is found to be unreliable due to extreme events. Modified Sharpe ratios that incorporate higher moments like skewness and kurtosis provide more stable measures of portfolio performance over time. Monthly returns best replicate future portfolio performance when considering risk and return, as they experience less variation than daily or weekly returns. Incorporating skewness into the model slightly improves performance estimation for future periods relative to the traditional Markowitz approach.
The Black-Scholes-Merton model provides a mathematical formula for estimating the price of call and put options based on certain variables. It assumes stock prices follow a log-normal distribution and uses variables like the current stock price, strike price, risk-free interest rate, time to expiration, and implied volatility to estimate an option's price. While widely used, it relies on assumptions that are not always accurate to real market conditions, such as constant volatility and a log-normal stock price distribution.
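The Black-Scholes-Merton call formula itself is compact enough to sketch directly, using the error function for the standard normal CDF. The inputs below are a standard at-the-money textbook example, not market data:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes-Merton price of a European call on a non-dividend stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Textbook example: at-the-money call, 5% rate, 20% vol, one year to expiry
price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 4))  # approximately 10.45
```

Note that sigma here is a constant, which is exactly the assumption the surrounding text flags as unrealistic: in real markets implied volatility varies by strike and maturity.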
This document summarizes a statistical arbitrage strategy that evaluates mean reversion in stock prices over time. It describes the strategy's assumptions that stock prices temporarily diverge from their equilibrium relative to the market before reverting. The experiment uses S&P 500 stock data to calculate daily returns, correlations, betas and residuals over rolling 60-day windows. When residuals exceed +/-2 standard deviations, positions are taken assuming reversion will occur. While backtested returns are appealing, live trading realities like transaction costs and limited share availability would likely reduce profits versus this theoretical analysis.
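The residual z-score rule described above can be sketched as follows. The ±2 standard deviation threshold and 60-day window follow the summary; the return series are simulated, and the helper itself is a simplified illustration rather than the study's actual implementation:

```python
from statistics import mean, stdev

def zscore_signal(stock, market, window=60, threshold=2.0):
    """Mean-reversion signal over a rolling window: fit beta of stock on
    market, compute residuals, and trade against a large final residual.
    Returns -1 (short), 1 (long), or 0 (flat)."""
    s, m = stock[-window:], market[-window:]
    mb, mm = mean(s), mean(m)
    beta = (sum((a - mb) * (b - mm) for a, b in zip(s, m))
            / sum((b - mm) ** 2 for b in m))
    residuals = [a - mb - beta * (b - mm) for a, b in zip(s, m)]
    z = residuals[-1] / stdev(residuals)
    if z > threshold:
        return -1  # price above equilibrium: expect reversion down
    if z < -threshold:
        return 1   # price below equilibrium: expect reversion up
    return 0

import random
random.seed(2)
mkt = [random.gauss(0, 0.01) for _ in range(60)]
stk = [m + random.gauss(0, 0.001) for m in mkt]
stk[-1] += 0.05  # force a large positive residual on the last day
print(zscore_signal(stk, mkt))  # expect a short signal
```

As the summary cautions, a signal like this ignores transaction costs and borrow availability, so live results would fall short of the backtest.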
This document provides an update on capital structure arbitrage strategies. It begins with an overview of Merton's structural model for pricing debt and equity. It then discusses the CreditGrades model, which builds on Merton's framework. The document reviews literature on using structural models in capital structure arbitrage trading strategies, and replicates Yu's 2006 strategy from 2004-2014. It proposes periodically recalibrating the model to match market spreads, finding this improves performance over keeping parameters fixed. In conclusion, structural models can provide a basis for capital structure arbitrage strategies but require adjustments to align implied and market spreads.
Express Measurement of Market Volatility Using the Ergodicity Concept (Jack Sarkissian)
Don't we want to base our trading decisions on current market conditions? Then why rely on time averages simply because they are easy to compute? Current market volatility can be obtained much faster by applying the ergodicity concept to financial markets. Ensemble averaging makes it possible to measure market volatility quickly, from only two points in time, and is as relevant to volatility measurement as the traditional measures.
This document discusses quantitative approaches to forecasting, including time series analysis and forecasting techniques. It covers the components of a time series, including trends, cycles, seasonality, and irregular components. Specific quantitative forecasting approaches covered include smoothing methods like moving averages, weighted moving averages, and exponential smoothing. Examples are provided to demonstrate how to perform moving averages and exponential smoothing on time series data for sales of headache medicine. The document aims to teach readers how to analyze time series data and select appropriate forecasting techniques.
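A minimal sketch of the smoothing methods and the MAD error metric mentioned above, applied to a hypothetical sales series (the numbers are invented for illustration, not the document's headache-medicine example):

```python
def moving_average(series, n):
    """n-period simple moving averages; entry i is the average of the
    n observations ending just before position n + i."""
    return [sum(series[i - n:i]) / n for i in range(n, len(series) + 1)]

def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing: F_{t+1} = alpha * y_t + (1 - alpha) * F_t,
    initialised with the first observation."""
    forecast = [series[0]]
    for y in series:
        forecast.append(alpha * y + (1 - alpha) * forecast[-1])
    return forecast[1:]

def mad(actual, forecast):
    """Mean absolute deviation between actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

sales = [120, 132, 125, 141, 138, 150, 147]
ma = moving_average(sales, 3)
print(ma)
print(mad(sales[3:], ma[:-1]))  # error of 3-period MA forecasts
```

The smoothing constant alpha plays the same role as the window length n: larger alpha (or smaller n) tracks recent changes faster at the cost of noisier forecasts.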
This document discusses incorporating news analysis into investment processes. It describes how news flows can be used to improve short-term risk assessments and condition risk estimates. Various data vendors that provide news analytics are also mentioned, as well as strategies for exploiting news signals, such as responding differently to "good" and "bad" news. Challenges with news-based strategies include defining events, assessing informational content, and managing holding periods.
This document provides an extensive literature review of studies examining performance persistence in mutual funds. The review summarizes findings from early studies in the 1960s-1980s that used long time periods of 10-15 years and generally found some evidence of performance persistence, especially for inferior performers. However, later studies using shorter time periods found more inconsistent results and that persistence was strongly dependent on the sample and methodology used. The review concludes that while short-term persistence is sometimes found, past performance is not a reliable predictor of future returns due to biases in conventional testing procedures. Results are often sensitive to the specific measures and time periods examined, especially for equity funds.
This document discusses different methods for stress testing portfolios, including historical and hypothetical scenarios. It defines stress testing as quantifying potential extreme adverse outcomes in a portfolio. Historical stress testing replays the impacts of actual historical events on a portfolio, while hypothetical stress testing involves invented scenarios. The document explores two historical and two hypothetical approaches, discussing their advantages and disadvantages. It emphasizes the importance of senior management support and stakeholder involvement in selecting stress testing scenarios.
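A historical stress test of the replay kind described above reduces to applying a past scenario's asset returns to today's holdings. The weights and scenario returns below are invented for illustration, not actual 2008 data:

```python
def stress_test(weights, scenario_returns, portfolio_value):
    """Historical stress test: apply a past scenario's per-asset returns
    to the current portfolio weights and report the hypothetical P&L."""
    shocked_return = sum(w * r for w, r in zip(weights, scenario_returns))
    return portfolio_value * shocked_return

# Hypothetical portfolio and a stylised crisis scenario (assumed figures)
weights = [0.6, 0.3, 0.1]        # equities, bonds, commodities
crisis = [-0.40, 0.05, -0.35]    # scenario asset returns
pnl = stress_test(weights, crisis, portfolio_value=1_000_000)
print(pnl)  # hypothetical loss under the scenario
```

A hypothetical stress test uses the same machinery; only the scenario vector changes from replayed history to an invented shock agreed with stakeholders.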
Similar to FA_Unit 4 (Time Series Analysis E-views).pdf (20)
- KPIs and Performance Metrics
- Learning and Adaptation
- Alignment and Cascading of Scorecards
Benefits:
- Systematic strategy formulation and execution.
- Framework flexibility and automation.
- Enhanced alignment and strategic focus across the organization.
Structural Design Process: Step-by-Step Guide for BuildingsChandresh Chudasama
The structural design process is explained: Follow our step-by-step guide to understand building design intricacies and ensure structural integrity. Learn how to build wonderful buildings with the help of our detailed information. Learn how to create structures with durability and reliability and also gain insights on ways of managing structures.
How MJ Global Leads the Packaging Industry.pdfMJ Global
MJ Global's success in staying ahead of the curve in the packaging industry is a testament to its dedication to innovation, sustainability, and customer-centricity. By embracing technological advancements, leading in eco-friendly solutions, collaborating with industry leaders, and adapting to evolving consumer preferences, MJ Global continues to set new standards in the packaging sector.
Digital Marketing with a Focus on Sustainabilitysssourabhsharma
Digital Marketing best practices including influencer marketing, content creators, and omnichannel marketing for Sustainable Brands at the Sustainable Cosmetics Summit 2024 in New York
Easily Verify Compliance and Security with Binance KYCAny kyc Account
Use our simple KYC verification guide to make sure your Binance account is safe and compliant. Discover the fundamentals, appreciate the significance of KYC, and trade on one of the biggest cryptocurrency exchanges with confidence.
At Techbox Square, in Singapore, we're not just creative web designers and developers, we're the driving force behind your brand identity. Contact us today.
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...my Pandit
Explore the fascinating world of the Gemini Zodiac Sign. Discover the unique personality traits, key dates, and horoscope insights of Gemini individuals. Learn how their sociable, communicative nature and boundless curiosity make them the dynamic explorers of the zodiac. Dive into the duality of the Gemini sign and understand their intellectual and adventurous spirit.
2. Background to Time Series Analysis
MBAE 0049, Financial Analytics (Instructor - Dr. Ankit Saxena)
GLA University, Mathura
3. Volatility vs. Risk
• Volatility refers to the upward and downward swings in market indices / interest rates, over which an investor has little or no control.
• Risk, on the other hand, is a personal matter: the possibility or chance of an injury, loss, or hazard, or how much financial uncertainty an investor can tolerate.
5. Volatility vs. Risk
• Volatility in the stock market can be measured in multiple ways. Standard deviation indicates how much a stock market index varies from its average, both on the upside and the downside.
• Risk, by contrast, has no single agreed-upon measure.
• In financial terminology, risk refers to the potential permanent loss of money. Risk tolerance means different things to different people.
6. Is volatility good?
• Volatility finds extensive application in the domain of equity investing.
• The risk-averse investor usually opts for a diversified equity portfolio, as against the risk-seeker, who prefers a portfolio inclined towards small caps.
• Volatility can be extremely helpful in maximizing returns, but how one responds to it makes all the difference.
• Closely following market valuations and instances of high volatility may help you make wise decisions regarding diversification, asset allocation, and rebalancing.
• Volatility does not imply risk of loss; it simply refers to price fluctuation.
8. 1. Historic Volatility
Volatility in its most basic form represents daily changes in stock prices.
We call this historical volatility (or historic volatility), and it is the starting point for understanding volatility in the greater sense.
Historic volatility is the standard deviation of the change in price of a stock or other financial instrument relative to its historic price over a period of time.
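As a sketch of this definition, historical volatility can be computed as the standard deviation of log returns; the annualization by 252 trading days below is a common convention, assumed here rather than stated on the slide:

```python
import math

def historical_volatility(prices, periods_per_year=252):
    """Annualized sample standard deviation of log returns."""
    returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

# A constant price series has zero historical volatility...
print(historical_volatility([100.0] * 10))  # 0.0
# ...while a fluctuating series has positive volatility.
print(historical_volatility([100, 102, 99, 103, 101]))
```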
9. 2. Implied Volatility
The options market is a bid and offer system in which buyers and sellers come together in an auction environment to effect price discovery and execute trades. These prices are quoted in dollars and cents.
From these prices, knowing all of the other Black-Scholes variables and using the Black-Scholes formula, we can calculate the volatility that is implicit in a traded price or the bid and offer.
This is referred to as the option's implied volatility.
Whereas historic volatility is static for a given fixed period of time, implied volatility will vary for a stock across different option strike prices. This is referred to as the volatility skew.
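A minimal sketch of this inversion, using the standard Black-Scholes call formula and a simple bisection search (the parameter values are illustrative, not from the slides):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection (price is monotone in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: price an at-the-money call at sigma = 0.20, then recover it.
price = bs_call(100, 100, 1.0, 0.05, 0.20)
print(round(implied_vol(price, 100, 100, 1.0, 0.05), 4))  # 0.2
```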
10. 3. Volatility Indices
• This concept is taken one step further. For many indices, a volatility index has been created and is commonly quoted in the financial media. The three most common ones:
• S&P 500 Volatility Index (VIX)
• S&P 100 Volatility Index (VXO)
• Nasdaq 100 Volatility Index (VXN)
• These volatility indices are a weighted average of the implied volatilities for several series of options (puts and calls). Many market participants and observers will use these indices as a gauge of market sentiment.
11. 4. Intraday Volatility
• Finally, we have intraday volatility.
• This represents the market swings during the course of a trading day and is the most noticeable and readily available form of volatility.
• A common mistake is equating intraday volatility with the implied volatility index. These two forms of volatility are not interchangeable, but each carries its own importance in ascertaining investor sentiment and expectations.
12. Summary
• Historical Volatility: the movement of an asset or asset class relative to itself;
• Implied Volatility: volatility that is embedded in an option price;
• Volatility Index: a weighted average of implied volatilities for options on a particular index;
• Intraday Volatility: the price movements in a stock or index on or during a given trading day.
14. Statistical Stationarity
A stationary time series is one whose statistical properties, such as mean, variance, and autocorrelation, are all constant over time.
Most statistical forecasting methods are based on the assumption that the time series can be rendered approximately stationary (i.e., "stationarized") through the use of mathematical transformations.
A stationarized series is relatively easy to predict: you simply predict that its statistical properties will be the same in the future as they have been in the past!
The predictions for the stationarized series can then be "untransformed," by reversing whatever mathematical transformations were previously used, to obtain predictions for the original series.
Thus, finding the sequence of transformations needed to stationarize a time series often provides important clues in the search for an appropriate forecasting model.
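A minimal sketch of "stationarizing" by transformation: first differencing removes a linear trend. The series and helper name below are illustrative, not from the slides:

```python
def difference(series):
    """First difference: y'_t = y_t - y_{t-1}."""
    return [b - a for a, b in zip(series, series[1:])]

# A series with a deterministic linear trend is non-stationary in mean...
trend = [3 * t + 2 for t in range(10)]
# ...but its first difference is constant, hence trivially stationary.
print(difference(trend))  # [3, 3, 3, 3, 3, 3, 3, 3, 3]
```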
15. Unit Root (Dickey-Fuller) and Stationarity Tests on Time Series
• A time series Y_t (t = 1, 2, ...) is said to be stationary (in the weak sense) if its statistical properties (expectation, variance, autocorrelation) do not vary with time.
16. Stationarity Tests
Stationarity tests allow verifying whether a series is stationary or not. There are two different approaches:
• Stationarity tests, such as the KPSS test, for which the null hypothesis H0 is that the series is stationary; and
• Unit root tests, such as the Dickey-Fuller test and its augmented version, the augmented Dickey-Fuller test (ADF), or the Phillips-Perron test (PP), for which the null hypothesis is, on the contrary, that the series possesses a unit root and hence is not stationary.
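The core Dickey-Fuller idea can be sketched in a few lines: regress ΔY_t on Y_{t-1} and examine the t-statistic of the slope; a strongly negative statistic argues against a unit root. This is a toy version (no constant, no lag augmentation, and the special Dickey-Fuller critical values are omitted), and the seeded data are made up for the demo:

```python
import math
import random

def dickey_fuller_t(y):
    """t-statistic of beta in: Delta y_t = beta * y_{t-1} + e_t."""
    x = y[:-1]                                 # y_{t-1}
    dy = [b - a for a, b in zip(y, y[1:])]     # Delta y_t
    sxx = sum(v * v for v in x)
    beta = sum(a * b for a, b in zip(x, dy)) / sxx
    resid = [d - beta * v for d, v in zip(dy, x)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    return beta / math.sqrt(s2 / sxx)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(300)]           # stationary white noise
walk = [sum(noise[: t + 1]) for t in range(300)]           # random walk: unit root

t_noise = dickey_fuller_t(noise)   # strongly negative: evidence against a unit root
t_walk = dickey_fuller_t(walk)     # much closer to zero: cannot reject the unit root
print(t_noise, t_walk)
```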
17. Interpreting the Results of an ADF Test, a PP Test and a KPSS Test (Example on Stationary Series)
18. • We can see that the three tests agree for these series. For the first series, both the ADF and the PP tests reject the null hypothesis that the series possesses a unit root (ρ = 1) and retain the alternative hypothesis that it is stationary, while the KPSS test keeps the null hypothesis that the series is stationary.
19. • For the second series, the p-values are not as low (ADF test) or as high (KPSS test) as they were for the first sample, but the same conclusions are retained.
20. Interpreting the Results of an ADF Test, a PP Test and a KPSS Test (Example on Non-Stationary Series)
21. Performing Unit Root Tests in EViews
• To begin, double click on the series name to open the series window, and choose View/Unit Root Test…
22. • For the time series in column F (see results on sheet Dickey-Fuller|Phillips-Perron3), we change the alternative option to explosive for the ADF test and to trend for the KPSS test. The KPSS test leads to the conclusion that the series is not stationary, while both unit root tests reject the hypothesis of a unit root in the data generation mechanism.
24. Correlogram
• This view displays the autocorrelation and partial autocorrelation functions up to the specified order of lags.
• These functions characterize the pattern of temporal dependence in the series and typically make sense only for time series data.
• When you select View/Correlogram…, the Correlogram Specification dialog box appears.
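The sample autocorrelation function that a correlogram plots can be sketched in a few lines (a toy implementation; EViews computes this for you):

```python
def acf(series, lag):
    """Sample autocorrelation of the series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((v - mean) ** 2 for v in series)
    ck = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return ck / c0

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # strongly trending series
print(acf(data, 0))   # 1.0 by construction
print(acf(data, 1))   # 0.7: neighbouring values move together
```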
26. Johansen Co-integration Test
Johansen's test is a way to determine whether three or more time series are cointegrated. More specifically, it assesses the validity of a cointegrating relationship using a maximum likelihood estimation (MLE) approach. It is also used to find the number of cointegrating relationships and as a tool for estimating those relationships (Wee & Tan, 1997).
There are two types of Johansen's test: one uses the trace statistic (from linear algebra), the other a maximum eigenvalue approach. (An eigenvalue is a special scalar: when you multiply a matrix by a vector and get back the same vector scaled by a scalar, that scalar is called an eigenvalue.)
27. Granger Causality Test: a test to check interdependency
28. Granger Causality Test
The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another.
Ordinarily, regressions reflect "mere" correlations, but Clive Granger argued that causality in economics could be reflected by measuring the ability to predict the future values of one time series using past values of another time series.
Since the question of "true causality" is deeply philosophical, econometricians assert that the Granger test finds only "predictive causality."
29. Explanation
• Granger defined the causality relationship based on two principles:
• The cause happens prior to its effect.
• The cause has unique information about the future values of its effect.
30. Granger Causality Test: Y = f(X)
Model            Res. DF   Diff. DF   F       p-value
Complete model   2
Reduced model    3         -1         0.318   0.629

Granger Causality Test: X = f(Y)
Model            Res. DF   Diff. DF   F       p-value
Complete model   2
Reduced model    3         -1         1.280   0.375
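The test behind a table like the one above compares a "reduced" model (y explained by its own lag) against a "complete" model that adds a lag of x, via an F-test on the residual sums of squares. The one-lag sketch below assumes demeaned series and uses made-up seeded data, not the slide's dataset:

```python
import random

def ols_rss(y, xs):
    """Residual sum of squares of y regressed on one or two columns (no intercept)."""
    if len(xs) == 1:
        b = sum(a * c for a, c in zip(xs[0], y)) / sum(a * a for a in xs[0])
        resid = [c - b * a for a, c in zip(xs[0], y)]
    else:
        x1, x2 = xs
        s11 = sum(a * a for a in x1); s22 = sum(a * a for a in x2)
        s12 = sum(a * b for a, b in zip(x1, x2))
        s1y = sum(a * c for a, c in zip(x1, y)); s2y = sum(a * c for a, c in zip(x2, y))
        det = s11 * s22 - s12 * s12
        b1 = (s22 * s1y - s12 * s2y) / det
        b2 = (s11 * s2y - s12 * s1y) / det
        resid = [c - b1 * a - b2 * b for a, b, c in zip(x1, x2, y)]
    return sum(e * e for e in resid)

def granger_f(y, x):
    """F statistic for 'x Granger-causes y' with one lag (demeaned series assumed)."""
    yt, ylag, xlag = y[1:], y[:-1], x[:-1]
    rss_reduced = ols_rss(yt, [ylag])        # y_t = a*y_{t-1}          (reduced model)
    rss_full = ols_rss(yt, [ylag, xlag])     # y_t = a*y_{t-1} + b*x_{t-1}  (complete)
    n = len(yt)
    return (rss_reduced - rss_full) / (rss_full / (n - 2))

random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.5 * y[-1] + 0.9 * x[t - 1] + 0.1 * random.gauss(0, 1))

# x was built to drive y, so lagged x helps predict y far more than vice versa.
print(granger_f(y, x) > granger_f(x, y))
```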
32. ARIMA
• The terms ARIMA and Box-Jenkins are used interchangeably.
• The purpose of ARIMA modeling is to establish a relationship between the present value of a time series and its past values so that forecasts can be made on the basis of the past values alone.
• ARIMA stands for Autoregressive Integrated Moving Average. The letter "I" (Integrated) indicates that the time series being modeled has been transformed into a stationary time series.
• ARIMA encompasses three different types of models: an AR (autoregressive) model, an MA (moving average) model, or an ARMA model, which includes both AR and MA terms.
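A toy sketch of the AR side of this family: fit an AR(1) by least squares and forecast one step ahead from past values alone. The coefficient and data are illustrative; real ARIMA software also handles differencing and MA terms:

```python
def fit_ar1(y):
    """Least-squares estimate of phi in y_t = phi * y_{t-1} (demeaned series)."""
    num = sum(a * b for a, b in zip(y[:-1], y[1:]))
    den = sum(a * a for a in y[:-1])
    return num / den

def forecast_ar1(y, phi, steps=1):
    """Iterate the fitted recursion forward from the last observation."""
    last = y[-1]
    for _ in range(steps):
        last = phi * last
    return last

# A series generated exactly by y_t = 0.8 * y_{t-1}: the fit recovers phi.
series = [1.0]
for _ in range(20):
    series.append(0.8 * series[-1])
phi = fit_ar1(series)
print(round(phi, 6))  # 0.8
print(forecast_ar1(series, phi))
```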
33. Modeling Process
Model Identification → Model Estimation → Diagnostic Checking → Forecasting
35. Economic Forecasting
Based on past and current information, the objective of forecasting is to provide quantitative estimates of the likely future course of the object of interest (e.g., personal consumption expenditure).
We develop econometric models and use one or more methods of forecasting its future course.
36. Economic Forecasts
• Economics [GDP, Unemployment, Consumption, Investment, Interest Rates]
• Financial Asset Management [Asset Returns, Exchange Rates and Commodity Prices]
• Financial Risk Management [Asset Return Volatility]
• Marketing [Response of Sales to Different Marketing Schemes]
• Business and Government [Revenue Forecasts]
• Crisis Management [Probabilities of Default, Currency Devaluations]
37. Point & Interval Forecasts
• In point forecasts we provide a single value for each forecast period.
• In interval forecasts we obtain a range, or an interval, that will include the realized value with some probability.
• The interval forecast provides a margin of uncertainty about the point forecast.
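A sketch of the two forecast types for a naive mean model: the point forecast is the sample mean, and a normal-approximation interval adds ±1.96 sample standard deviations. The 95% level and the data are assumptions for illustration:

```python
import math

def point_and_interval(history, z=1.96):
    """Point forecast = sample mean; interval = point +/- z * sample std dev."""
    n = len(history)
    mean = sum(history) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in history) / (n - 1))
    return mean, (mean - z * sd, mean + z * sd)

point, (low, high) = point_and_interval([10.0, 12.0, 11.0, 13.0, 9.0, 11.0])
print(point)                # 11.0
print(low < point < high)   # the interval brackets the point forecast
```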
38. Volatility Clustering
• Financial time series, such as stock prices, interest rates, and foreign exchange rates, exhibit volatility clustering.
• Periods of turbulence: prices show wide swings.
• Periods of tranquility: there is relative calm.
39. Volatility Clustering
• Various sources of news and other economic events may have an impact on the time series pattern of asset prices.
• News can lead to various interpretations, and economic events may occur.
• Normally, large positive and large negative observations in financial time series appear in clusters.
40. Real and Financial Effect
• Such swings in prices have serious effects.
• Investors are concerned about the rate of return on their investment, and about the risk of the investment and the variability / volatility of that risk.
• It is therefore important to measure asset return volatility.
41. Measuring Volatility
• A simple measure of asset return volatility is its variance over time.
• Variance by itself does not capture volatility clustering:
• Subtract the mean value from each individual value, square the differences, and divide by the number of observations.
• This is a measure of unconditional variance: a single number for a given sample.
• It does not take into account the past history (time-series volatility).
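To see why the unconditional variance misses clustering: it is a single, order-insensitive number, so reordering the sample leaves it unchanged and any time pattern in the swings is invisible to it. A toy check with made-up returns:

```python
def unconditional_variance(x):
    """Population variance: mean squared deviation from the sample mean."""
    mean = sum(x) / len(x)
    return sum((v - mean) ** 2 for v in x) / len(x)

clustered = [0, 0, 0, 5, -5, 4, -4, 0]    # calm spell followed by turbulence
reordered = sorted(clustered)             # same values, time ordering destroyed
print(unconditional_variance(clustered))                                    # 10.25
print(unconditional_variance(clustered) == unconditional_variance(reordered))  # True
```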
43. The ARCH Model
• A measure that takes into account the past history (conditional / time-varying volatility).
• In time series involving asset returns, such as returns on stocks or foreign exchange, we observe auto-correlated heteroscedasticity.
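A sketch of the ARCH(1) recursion, sigma²_t = omega + alpha * e²_{t-1}, which makes today's variance depend on yesterday's squared shock and so reproduces volatility clustering. The parameters (omega = 0.2, alpha = 0.5) and seed are illustrative:

```python
import random

def simulate_arch1(n, omega=0.2, alpha=0.5, seed=42):
    """Simulate e_t = sigma_t * z_t with sigma_t^2 = omega + alpha * e_{t-1}^2."""
    rng = random.Random(seed)
    eps = [0.0]
    for _ in range(n):
        sigma2 = omega + alpha * eps[-1] ** 2   # conditional variance from past shock
        eps.append(sigma2 ** 0.5 * rng.gauss(0, 1))
    return eps[1:]

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    mean = sum(x) / len(x)
    c0 = sum((v - mean) ** 2 for v in x)
    return sum((x[t] - mean) * (x[t + 1] - mean) for t in range(len(x) - 1)) / c0

eps = simulate_arch1(2000)
squared = [e * e for e in eps]
# Returns themselves are nearly uncorrelated, but their squares are not:
# this autocorrelation in the squares is the ARCH effect.
print(lag1_autocorr(squared) > abs(lag1_autocorr(eps)))
```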
44. Heteroscedasticity
• The basic version of the least squares model assumes that the expected value of all error terms, when squared, is the same at any given point. This assumption is called homoskedasticity.
• Data in which the variances of the error terms are not equal, in which the error terms may reasonably be expected to be larger for some points or ranges of the data than for others, are said to suffer from heteroskedasticity.
45. Auto-correlated Heteroscedasticity
• Heteroscedasticity, or unequal variance, arises in cross-section (multi-company, industry, economy) data because of heterogeneity among individual cross-section units.
• In time series data, we usually observe auto-correlation.
• In financial data, we observe auto-correlated heteroscedasticity: the heteroscedasticity observed over different periods is itself auto-correlated.
• In the literature, this phenomenon is called the ARCH effect.
46. ARCH in EViews
• To estimate an ARCH or GARCH model, open the equation specification dialog by selecting Quick/Estimate Equation…, or by selecting Object/New Object…/Equation….
• Select ARCH from the method dropdown menu at the bottom of the dialog.
47. Interpretation: ARCH Test
• Null hypothesis: there is no ARCH effect.
• Alternative hypothesis: there is an ARCH effect.
• In the above illustration, the null hypothesis is rejected.