The Black-Litterman model combines market-equilibrium expected returns with an investor's views to estimate expected returns. It uses a Bayesian approach to derive a new distribution given both market information and investor views. The methodology involves estimating market variables, specifying investor views as normal distributions, and solving an optimization that minimizes the distance between the parameters and both the market and view information.
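To make the Bayesian combination step concrete, the sketch below computes Black-Litterman posterior expected returns for a toy two-asset market. The prior returns, covariance matrix, view, and scaling constant tau are invented illustrative values, not figures from the document.

```python
import numpy as np

# Toy inputs (illustrative values, not from the document)
Sigma = np.array([[0.04, 0.01],       # asset return covariance matrix
                  [0.01, 0.09]])
pi = np.array([0.05, 0.07])           # equilibrium (prior) expected returns
tau = 0.05                            # scaling of prior uncertainty

# One absolute view: "asset 2 will return 10%", with its own uncertainty
P = np.array([[0.0, 1.0]])            # view picks out asset 2
q = np.array([0.10])                  # view value
Omega = np.array([[0.02**2]])         # view uncertainty (variance)

# Black-Litterman posterior mean: blend prior and views by their precisions
inv = np.linalg.inv
posterior_precision = inv(tau * Sigma) + P.T @ inv(Omega) @ P
posterior_mean = inv(posterior_precision) @ (inv(tau * Sigma) @ pi + P.T @ inv(Omega) @ q)

print("posterior expected returns:", posterior_mean.round(4))
```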
OLS estimates regression parameters by minimizing the sum of squared errors, finding the coefficients that make the model's predictions as close as possible to the actual values. Under certain assumptions it provides the best linear unbiased estimates. Variance (VAR) and volatility are estimated using historical data and different distributional assumptions, including normal, mixture of normals, and non-parametric approaches, among others.
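As a minimal illustration of the least-squares idea, the sketch below fits an intercept and slope by solving the normal equations on simulated data; the data-generating numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1.5 + 2.0 * x + noise (arbitrary illustrative values)
x = rng.normal(size=200)
y = 1.5 + 2.0 * x + rng.normal(scale=0.5, size=200)

# Design matrix with an intercept column
X = np.column_stack([np.ones_like(x), x])

# OLS solves min_b ||y - X b||^2, i.e. the normal equations (X'X) b = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

residuals = y - X @ beta_hat
print("estimated intercept and slope:", beta_hat.round(3))
print("sum of squared errors:", float(residuals @ residuals))
```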
The document discusses the derivation and testing of the Capital Asset Pricing Model (CAPM). It begins by restating three key equations related to the CAPM. It then describes the assumptions and derivation of the CAPM, noting that the key insight is that the market portfolio is efficient. The document outlines how the CAPM makes testable predictions about asset expected returns and betas. It discusses additional assumptions required to test the CAPM using regression analysis. Specifically, it explains the Fama-MacBeth and Gibbons-Ross-Shanken (GRS) approaches to estimating the security market line implied by the CAPM using cross-sectional and time-series regressions respectively.
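The following sketch conveys the flavor of the Fama-MacBeth two-pass procedure on simulated data: a first-pass time-series regression estimates each asset's beta, and a second pass regresses returns on those betas cross-sectionally each period, averaging the slopes. All inputs are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 240, 25                                  # months, assets
true_beta = rng.uniform(0.5, 1.5, size=N)
mkt = rng.normal(0.006, 0.04, size=T)           # market excess returns
ret = true_beta[None, :] * mkt[:, None] + rng.normal(0, 0.05, size=(T, N))

# Pass 1: time-series regression of each asset on the market -> beta_i
X = np.column_stack([np.ones(T), mkt])
coef = np.linalg.lstsq(X, ret, rcond=None)[0]   # shape (2, N): alpha_i, beta_i
beta_hat = coef[1]

# Pass 2: for each period, cross-sectional regression of returns on betas
Z = np.column_stack([np.ones(N), beta_hat])
lambdas = np.array([np.linalg.lstsq(Z, ret[t], rcond=None)[0] for t in range(T)])

# Fama-MacBeth estimates: time-series averages of the cross-sectional slopes
gamma0, gamma1 = lambdas.mean(axis=0)
se = lambdas.std(axis=0, ddof=1) / np.sqrt(T)
print(f"intercept {gamma0:.4f} (se {se[0]:.4f}), market premium {gamma1:.4f} (se {se[1]:.4f})")
```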
This document introduces the concept of "ultimate profitability" to evaluate the effectiveness of market research. Ultimate profitability measures the maximum possible annual return from perfectly timing entry and exit from a market based on its price extremes. The document outlines a methodology to calculate ultimate profitability for different markets and indexes based on varying the scale of price movements considered. It presents an example calculation of ultimate profitability for the Russian equity index RUIX under different scales and finds an inverse power law relationship between profitability and scale.
This document discusses Value at Risk (VaR) and how it can be used by client advisors, sales/brokerage teams, and senior management to assess portfolio risks. VaR measures the maximum potential loss of a portfolio over a time period, given a probability. It allows risks across different asset types to be measured together. The document outlines how VaR is calculated using historical volatility and correlation data to project a range of possible future portfolio values. It also discusses how options are incorporated into VaR using measures like delta, gamma, and theta to account for non-normal return distributions. The overall aim is to inform readers about risk measurement and how VaR can help mitigate risks for clients.
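A minimal variance-covariance (delta-normal) VaR sketch for a two-asset portfolio follows; the portfolio value, weights, volatilities, correlation, confidence level, and horizon are all invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Illustrative inputs (not from the document)
value = 10_000_000                       # portfolio value in dollars
weights = np.array([0.6, 0.4])
vols = np.array([0.012, 0.020])          # daily volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
cov = np.outer(vols, vols) * corr        # daily covariance matrix

# Portfolio volatility from weights and covariance
port_vol = float(np.sqrt(weights @ cov @ weights))

# 99% one-day VaR under a normal return assumption
z = norm.ppf(0.99)
var_99 = value * z * port_vol
print(f"1-day 99% VaR: ${var_99:,.0f}")
```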
This document summarizes a study that tests whether securities with different estimated betas, a measure of risk, systematically experience different average returns. The study sorts securities into portfolios based on their estimated betas and then tests whether the average returns of the portfolios are statistically different. If portfolios with different betas have similar average returns, this would suggest betas do not reliably measure risk that is priced in the market, challenging a key implication of the capital asset pricing model (CAPM). The results show portfolios with different estimated betas have statistically indistinguishable average returns. This provides evidence that estimated betas do not reliably measure risk that is priced in the market, calling into question the empirical validity of the CAPM.
- The document discusses using Markowitz's modern portfolio theory and the mean-variance approach to construct an optimal portfolio from two stocks, R1 BAG and R2 ABF, with the goal of minimizing risk.
- It analyzes the stock performance and portfolio returns over two periods, and finds that a weighting of 70.8% in R2 ABF provides the minimum-risk portfolio (see the sketch after this list).
- It also discusses using the single-index model as an alternative to Markowitz's approach, and calculates the beta, alpha, and expected returns for the two stocks based on market index returns.
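A small sketch of the two-asset minimum-variance calculation referenced above; the volatilities and correlation are placeholders rather than the BAG/ABF estimates from the report.

```python
import numpy as np

# Placeholder inputs (not the BAG/ABF figures from the report)
sigma1, sigma2 = 0.22, 0.15       # annualised volatilities of stock 1 and stock 2
rho = 0.35                        # correlation between the two stocks
cov12 = rho * sigma1 * sigma2

# Minimum-variance weight on stock 1 (closed form for two assets):
# w1* = (sigma2^2 - cov12) / (sigma1^2 + sigma2^2 - 2*cov12)
w1 = (sigma2**2 - cov12) / (sigma1**2 + sigma2**2 - 2 * cov12)
w2 = 1.0 - w1

port_var = w1**2 * sigma1**2 + w2**2 * sigma2**2 + 2 * w1 * w2 * cov12
print(f"weights: {w1:.3f} / {w2:.3f}, portfolio volatility: {np.sqrt(port_var):.3%}")
```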
Arbitrage pricing theory & Efficient market hypothesis - Hari Ram
Arbitrage pricing theory (APT) is a multi-factor asset pricing model based on the idea that an asset's returns can be predicted using the linear relationship between the asset's expected return and a number of macroeconomic variables that capture systematic risk.
The article seeks to restore investor faith in the CAPM and describes how closely the model is coupled to actual investment practice on the ground. It also addresses the general criticism that the CAPM fits empirical asset pricing poorly, which has cast doubt on the model's validity.
This report analyzes the Three Stars investment fund. It first examines the fund's return characteristics, finding the fund had a higher mean return but also higher risk than the market index. It then evaluates the fund's performance using several metrics. Sharpe and Treynor ratios found the fund offered greater risk-adjusted returns than the market. The report also conducts market timing analysis and concludes the fund was able to time the market to some degree to maximize returns. Overall, the analysis finds the fund performed well but also carried higher risk than the market.
The Black-Scholes-Merton model provides a mathematical formula for estimating the price of call and put options based on certain variables. It assumes stock prices follow a log-normal distribution and uses variables like the current stock price, strike price, risk-free interest rate, time to expiration, and implied volatility to estimate an option's price. While widely used, it relies on assumptions that are not always accurate to real market conditions, such as constant volatility and a log-normal stock price distribution.
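For reference, a compact pricer implementing the standard Black-Scholes-Merton closed-form formula for European calls and puts, evaluated on illustrative inputs.

```python
import numpy as np
from scipy.stats import norm

def bs_price(S, K, r, T, sigma, kind="call"):
    """Black-Scholes-Merton price of a European call or put (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if kind == "call":
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Illustrative inputs: spot 100, strike 105, 1% rate, 6 months, 25% volatility
print("call:", round(bs_price(100, 105, 0.01, 0.5, 0.25, "call"), 4))
print("put :", round(bs_price(100, 105, 0.01, 0.5, 0.25, "put"), 4))
```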
1. The document analyzes value and growth stocks from 1975 to 2004, comparing their returns and risks. It finds that value stocks generally outperformed growth stocks over this period.
2. A moving-average analysis of the value-growth return spread shows it fluctuated between positive and negative returns with no clear pattern, contradicting the theory that value stocks always outperform. The spreads were also small relative to the portfolios' volatility.
3. Regression analyses found the CAPM did not accurately predict returns: the growth portfolio underperformed its prediction by 0.15% annually, while the value portfolio outperformed by 0.14%, contradicting the CAPM. The spread portfolio had low correlation to the market.
This document discusses strategies for hedging risks faced by Muck River Plaza, a shopping center with two major anchor tenants (Best Buy and Barnes & Noble) and smaller tenants. It first evaluates the importance of the anchor tenants and models their credit risk and probability of default using the KMV-Merton model. It then values the lease obligations under different scenarios for the anchor tenants. Finally, it discusses hedging strategies and recommends specific derivatives to hedge risks, including risks from lower sales volumes impacting smaller tenants.
Improving Returns from the Markowitz Model using GA - An Empirical Validation o... - idescitation
Portfolio optimization is the task of allocating the investor's capital among different assets so that returns are maximized while risk is simultaneously minimized. The traditional model for portfolio optimization is the Markowitz model [1], [2], [3]. In the ideal case of linear constraints, the Markowitz model can be solved using quadratic programming; however, in real-life scenarios the presence of nonlinear constraints, such as limits on the number of assets in the portfolio, constraints on the budgetary allocation to each asset class, transaction costs, and limits on the maximum weight that can be assigned to each asset, makes the problem computationally hard to solve (NP-hard). Hence, soft computing based approaches seem best suited to solving such a problem. This study uses a soft computing technique, specifically Genetic Algorithms (GA), to overcome this issue: the GA optimizes the parameters of the Markowitz model so that overall portfolio returns are maximized while the standard deviation of the returns is minimized. The proposed system is validated by testing its ability to generate optimal stock portfolios with high returns and low standard deviations, with the assets drawn from stocks traded on the Bombay Stock Exchange (BSE). Results show that the proposed system is able to generate much better portfolios than the traditional Markowitz model.
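A bare-bones genetic-algorithm sketch in the spirit of the abstract above: it evolves long-only weight vectors to trade mean return against standard deviation on simulated returns. It omits the cardinality, budget, and transaction-cost constraints discussed in the paper and is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily returns for 8 assets (a stand-in for real stock data)
n_assets, n_days = 8, 750
returns = rng.normal(0.0005, 0.015, size=(n_days, n_assets))
mu, cov = returns.mean(axis=0), np.cov(returns.T)

def fitness(w, risk_aversion=5.0):
    # Reward mean return, penalise portfolio standard deviation
    return w @ mu - risk_aversion * np.sqrt(w @ cov @ w)

def normalise(w):
    w = np.clip(w, 0, None)                        # long-only weights
    return w / w.sum()

# Initial population of random weight vectors
pop_size, n_gen = 100, 200
pop = np.array([normalise(rng.random(n_assets)) for _ in range(pop_size)])

for _ in range(n_gen):
    scores = np.array([fitness(w) for w in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]          # selection: keep the best half
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(n_assets) < 0.5          # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.02, n_assets)     # mutation
        children.append(normalise(child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", best.round(3))
print("annualised return ~", round(float(best @ mu) * 252, 4),
      "volatility ~", round(float(np.sqrt(best @ cov @ best * 252)), 4))
```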
Polymetis LLC discusses key statistics used in evaluating hedge fund managers and constructing portfolios: mean, standard deviation, and correlation coefficient. While these are commonly used, hedge fund returns are typically not normally distributed as assumed, exhibiting negative skew and excess kurtosis instead. Using the normality assumption can impact results, though the effect is not always large. Additionally, very long track records may contain multiple samples as strategies evolve over time. Both the mean and standard deviation require sufficient sample sizes to be meaningful.
Optimum Time Series Granularity in the Estimation of Financial Beta - Qiaochu Geng
- The document examines the optimal time series granularity for estimating financial beta. Beta refers to the sensitivity of a security's returns to changes in the market.
- It estimates beta coefficients for companies in the S&P 500 using daily, monthly, quarterly, and yearly returns data from 1980-2016.
- Results show that monthly beta estimates have a mean and median closest to 1.0, which is considered ideal for portfolio management purposes. Therefore, the research determines that monthly is the optimum granularity for estimating beta.
Case study of a comprehensive risk analysis for an asset manager - Gateway Partners
The following case study is an excerpt of a comprehensive risk analysis prepared for an asset manager client of Gateway Partners. This client is a medium-sized asset manager with offices in the U.S. and abroad that needed assistance in both quantifying and fully understanding the risk profile of its multi-billion dollar portfolio. Additional risk concerns of this client included “worst case” risk scenario analysis and the use of derivative instruments to assist in hedging its portfolio. While this case study has been used with the permission of our client, specific securities and the amounts they represent in the client portfolio have been changed and reduced to protect the identity of the client. Gateway Partners is proud to present this case study as an example of the risk management services we provide to our clients.
This document discusses measuring the upselling potential of life insurance customers using a stochastic frontier model. It proposes that upselling potential should account for the insurer's selling inefficiency, not just changes to customer demographics. The model is applied to data from 5,000 customers of a life insurer. Key findings include:
- The model estimates the maximum premium each customer could provide (the frontier) and the inefficiency of the insurer's selling efforts (u_i).
- Upselling scores are calculated based on how much more each customer could potentially purchase compared to their actual premium.
- The analysis found the insurer could have sold an additional 25% in premiums for over half of its customers by reducing selling inefficiency.
The document discusses case studies from Microstructure Research & Engineering Technologies (MRET), including one about an automated market maker seeking to increase profits. It outlines MRET's methodology for solving quantitative problems through statistical computing. This includes formulating hypotheses, collecting and examining order flow data, constructing a factor dictionary, and using techniques like regression analysis and machine learning. The case study presented applies this methodology to help the market maker optimize its pricing of liquidity and maximize risk-adjusted returns.
The document summarizes a study that uses the Capital Asset Pricing Model (CAPM) to analyze the risk and returns of 5 stocks from 2013-2015. It calculates daily returns, beta, alpha, and the correlation of individual stock returns with market returns. The results show most stocks had a slight negative excess return and negative Sharpe ratio, indicating average risk-adjusted performance. Betas were all statistically significant, with GE closest to the market. R-squared values ranged from 20-48%, explaining some but not all variation in returns. The analysis supports that CAPM provides useful but imperfect insights into the relationship between a stock's risk and return.
1) A managed volatility approach seeks to provide competitive returns compared to a benchmark index while maintaining lower volatility over the long term by constructing a portfolio of stocks with low expected volatility.
2) The document summarizes the results of a simulation of a managed volatility strategy for an EMU portfolio between 1999 and 2010, which showed an improved Sharpe ratio and higher risk-adjusted returns than the benchmark index, with over 28% lower volatility.
3) Managed volatility strategies that aim to limit downside risk while maintaining potential upside have become increasingly popular with investors seeking to control risk independently from returns.
Mid caps have historically provided better risk-adjusted returns than small or large caps over 10- and 30-year periods. They have outperformed both asset classes while exposing investors to less risk, as measured by standard deviation and Sharpe ratios. However, mid caps remain an underutilized asset class, receiving only about 5% of investors' assets despite representing approximately 30% of the total market. Their lower levels of research coverage and investor interest may lead to pricing inefficiencies that could be exploited.
Black Litterman portfolio optimization - Hoang Nguyen
This document provides an overview and application of the Black-Litterman portfolio optimization model. It summarizes the key steps of the Black-Litterman model, which combines an investor's subjective views on expected returns with an implied equilibrium to determine optimal portfolio weights. The document then applies the Black-Litterman model to 10 stocks from the Ho Chi Minh City stock exchange in Vietnam over a one-year period. It finds that Black-Litterman portfolios achieved significantly better return-to-risk performance than the traditional mean-variance approach.
This lecture series covers the use of the R language, its interface, and the functions required to evaluate financial risk models. It further covers R applications for financial market data, risk measurement, modern portfolio theory, risk modeling of returns with generalized hyperbolic and lambda distributions, Value at Risk (VaR) modelling, extreme value methods and models, the class of ARCH models, GARCH risk models, and portfolio optimization approaches.
The document presents a model for estimating exposure at default (EAD) for contingent credit lines (CCLs) at the portfolio level. It models each CCL as a portfolio of put options, with the exercise of each put following a Poisson process. The model convolves the usage distributions of individual obligors, sub-segments, and segments to estimate the portfolio-level EAD distribution. The authors test the model using data from Moody's and find near-Gaussian results. They discuss future work to refine the model and make it more practical for banks to estimate regulatory capital requirements.
This document provides an update to a previous study on the performance of passive and active collar strategies applied to the Powershares QQQ ETF (QQQ). The update extends the analysis period through September 2010. It finds that during market declines like the tech bubble and credit crisis, collar strategies provided downside protection and strong returns compared to a long position in QQQ. However, collars underperformed during strong market climbs. The document also analyzes applying collar strategies to a small cap mutual fund and finds similar beneficial results. It concludes that active collars, which dynamically adjust based on momentum, volatility, and macroeconomic signals, tended to outperform passive collars both in-sample and out-of-sample.
This document summarizes the capital asset pricing model (CAPM). It begins by outlining the logic and key assumptions of the CAPM, including that all investors hold the same market portfolio which must lie on the efficient frontier. It then states that the CAPM predicts the expected return of an asset is determined by its beta, or non-diversifiable risk relative to the market. However, the document notes that empirical tests have found the CAPM performs poorly in applications. It concludes the CAPM's failings indicate applications based on the model are invalid, challenging researchers to develop alternative models.
This document summarizes key concepts from the book "Active Portfolio Management" by Richard C. Grinold and Ronald N. Kahn.
It introduces the foundations of active portfolio management including risk, expected returns, benchmarks, value added, and the information ratio. The information ratio measures the expected level of annual residual return per unit of annual residual risk and defines the opportunities available to the active manager. Higher information ratios indicate greater potential for adding value through active management.
It also discusses concepts like consensus expected returns as defined by the CAPM, decomposing returns into market, residual and exceptional components, and managing total risk versus focusing on active and residual risk relative to a benchmark. The goal of active management is to maximize value added: residual return net of a penalty for residual risk.
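As a small numerical illustration of the information ratio described above (all numbers invented):

```python
# Illustrative numbers, not from the book: a manager with 2% annual residual
# (benchmark-relative) return and 4% annual residual risk (tracking error).
residual_return = 0.02
residual_risk = 0.04

information_ratio = residual_return / residual_risk
# Higher values indicate more value added per unit of residual risk taken
print("information ratio:", information_ratio)
```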
Value at Risk (VAR) is a risk management measure used to calculate potential losses over a given time period at a specified confidence level. There are three key elements - the level of loss, time period, and confidence level. For example, there is a 5% chance losses will exceed $20M over 5 days. VAR does not provide information on potential losses above the VAR level. There are three main methodologies used to calculate VAR - historical simulation, variance-covariance, and Monte Carlo simulation. Each has its own strengths and weaknesses in terms of implementation and ability to capture risk.
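The sketch below contrasts two of the methodologies named above, historical simulation and Monte Carlo simulation, on simulated portfolio returns; the portfolio value, horizon, and distributional choices are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
value = 25_000_000                     # illustrative portfolio value
alpha = 0.95                           # 95% confidence -> 5% tail

# Pretend these are 1,000 historical 5-day portfolio returns
hist_returns = rng.standard_t(df=5, size=1000) * 0.01

# Historical simulation: VaR is the empirical loss quantile
hist_var = -np.quantile(hist_returns, 1 - alpha) * value

# Monte Carlo: simulate returns from a fitted model (here simply a normal
# calibrated to the historical mean and standard deviation)
sims = rng.normal(hist_returns.mean(), hist_returns.std(ddof=1), size=100_000)
mc_var = -np.quantile(sims, 1 - alpha) * value

print(f"historical-simulation VaR: ${hist_var:,.0f}")
print(f"Monte Carlo (normal) VaR : ${mc_var:,.0f}")
```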
This document discusses portfolio optimization and different algorithms used to solve portfolio optimization problems. It begins by formulating the unconstrained and constrained portfolio optimization problems. For the unconstrained problem, it uses quadratic programming to generate the efficient frontier. For the constrained problem, it uses mixed integer quadratic programming and heuristic algorithms like genetic algorithm, tabu search and simulated annealing. It compares the results of these different algorithms and concludes some perform better than others in terms of accuracy and time complexity for portfolio optimization problems with constraints.
This document discusses risk management, including interest rate risk, credit risk, market risk, and regulatory frameworks like Basel I, Basel II, and Solvency II. It provides methods for computing interest rate risk, such as using an interest rate gap and duration gap approach. It also discusses expected loss estimates and unexpected loss measures for credit risk, as well as tools for estimating market risk and measures of volatility. Finally, it compares banks and insurers, and how risk management has changed after the financial crisis.
This document discusses risk management techniques for interest rate risk, credit risk, and market risk. It provides details on methods to calculate interest rate risk such as using interest rate gaps and duration gaps. It also discusses the Basel Committee framework for interest rate risk management. For credit risk, the document outlines estimating expected loss and unexpected loss. It discusses tools for estimating market risk and different applications of market risk models. The document also covers measuring volatility, limitations of the value-at-risk approach, the Basel Accords, insurance solvency regulations, and changes to risk management after the financial crisis.
Exchanges are centralized places where certain securities, commodities, derivatives, and other financial instruments are traded. To facilitate trading, exchanges take the central position of being the counterparty to both the buyers and the sellers of a product. This removes the possibility of disputes arising from non-performance of a counterparty: the exchange guarantees that trades will be honored. Doing so creates credit risk for the exchange attributable to the buyers and sellers of its products. To address the potential loss from this credit risk, exchanges demand certain margin requirements from their counterparties.
This presentation addresses in detail the issues considered in the calculation and maintenance of margin requirements.
This document discusses two methods for calculating Value-at-Risk (VaR): 1) Assuming a normal distribution of portfolio returns and using a GARCH model to estimate conditional volatility, and 2) A nonparametric bootstrap method. The normal distribution assumption is appropriate only during calm periods but will underestimate risk during turbulent times. The bootstrap method does not rely on distributional assumptions and better accounts for uncertainty in conditional variance dynamics to provide more accurate VaR estimates. An empirical exercise applies the two methods to the CAC40 index to demonstrate how the normal distribution method fails VaR tests during turbulence while the bootstrap method passes.
A Quantitative Risk Optimization Of Markowitz Model - Amir Kheirollah
This thesis investigates assumptions of the Markowitz model and evaluates alternative measures for risk-adjusted return. It analyzes Swedish large cap stock returns and finds evidence against the normal distribution assumption. The Sharpe ratio is found to be unreliable due to extreme events. Modified Sharpe ratios that incorporate higher moments like skewness and kurtosis provide more stable measures of portfolio performance over time. Monthly returns best replicate future portfolio performance when considering risk and return, as they experience less variation than daily or weekly returns. Incorporating skewness into the model slightly improves performance estimation for future periods relative to the traditional Markowitz approach.
This paper proposes using a "shrinkage" estimator as an alternative to the traditional sample covariance matrix for portfolio optimization. The shrinkage estimator combines the sample covariance matrix with a structured "shrinkage target" using a shrinkage constant to minimize distance from the true covariance matrix. The paper finds this shrinkage estimator significantly increases the realized information ratio of active portfolio managers compared to the sample covariance matrix. An empirical study on historical stock return data confirms the shrinkage method leads to higher ex post information ratios in portfolio optimization. However, the shrinkage target assumes identical pairwise correlations that may not fully reflect market characteristics.
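A stripped-down sketch of shrinking a sample covariance matrix toward a constant-correlation target; here the shrinkage constant is fixed arbitrarily rather than estimated optimally as in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, size=(120, 10))    # simulated monthly returns, 10 assets

sample_cov = np.cov(returns.T)
vols = np.sqrt(np.diag(sample_cov))

# Constant-correlation shrinkage target: keep sample variances, replace all
# pairwise correlations by their average
corr = sample_cov / np.outer(vols, vols)
n = len(corr)
avg_rho = (corr.sum() - n) / (n * (n - 1))
target = avg_rho * np.outer(vols, vols)
np.fill_diagonal(target, np.diag(sample_cov))

delta = 0.3                                        # arbitrary shrinkage intensity in [0, 1]
shrunk_cov = delta * target + (1 - delta) * sample_cov

print("average correlation used in target:", round(avg_rho, 4))
print("shrunk covariance is positive definite:",
      bool(np.all(np.linalg.eigvalsh(shrunk_cov) > 0)))
```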
Equilibrium, mutual funds and Sharpe ratio - Luis Pons
The document discusses concepts related to optimal portfolios, mutual funds, and risk-adjusted performance measures. It defines optimal portfolios, two-fund separation, measures of return including time-weighted and dollar-weighted returns. It also discusses benchmarks, risk-adjusted measures including the Sharpe Ratio, Treynor Ratio, and Jensen's Alpha. It compares the Sharpe and Treynor measures and critiques risk-adjusted performance measures.
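A quick sketch computing the Sharpe ratio, Treynor ratio, and Jensen's alpha mentioned above from simulated fund and market returns; all inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
rf = 0.02 / 12                                  # monthly risk-free rate (illustrative)
mkt = rng.normal(0.007, 0.04, size=120)         # simulated monthly market returns
fund = 0.001 + 1.1 * mkt + rng.normal(0, 0.02, size=120)   # simulated fund returns

excess_fund, excess_mkt = fund - rf, mkt - rf
beta = np.cov(excess_fund, excess_mkt)[0, 1] / np.var(excess_mkt, ddof=1)

sharpe = excess_fund.mean() / fund.std(ddof=1)          # excess return per unit of total risk
treynor = excess_fund.mean() / beta                     # excess return per unit of beta
jensen_alpha = excess_fund.mean() - beta * excess_mkt.mean()   # return above the CAPM line

print(f"beta {beta:.2f}  Sharpe {sharpe:.3f}  Treynor {treynor:.4f}  alpha {jensen_alpha:.5f}")
```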
This document provides an overview of financial modeling techniques for equity markets that incorporate higher moments like skewness and kurtosis. It discusses commonly used portfolio constraints, downside risk measures, and approaches to optimize portfolios based on higher moments. Specifically, it explores using expansions of utility functions and polynomial goal programming to maximize expected return and skewness while minimizing variance and kurtosis. It also notes challenges in accurately estimating higher moments and describes an approach by Malevergne and Sornette to model multivariate distributions by transforming returns into standard normal variables.
Testing and extending the capital asset pricing model - Gabriel Koh
This paper attempts to prove whether the conventional Capital Asset Pricing Model (CAPM) holds with respect to a set of asset returns. Starting with the Fama-MacBeth cross-sectional regression, we show through the significance of pricing errors that the CAPM does not hold. Hence, we extend the original CAPM by including risk factors and factor-mimicking portfolios built on firm-specific characteristics and test for their significance in the model. Ultimately, by adding significant factors, we find that the extended model better explains asset returns, but still does not entirely capture the pricing errors.
An Introduction to the Black Litterman Model - Simon Long
The document provides an introduction and explanation of the Black-Litterman model for portfolio optimization. It discusses some limitations of the modern portfolio theory approach, including reliance on historical data to estimate future returns. The Black-Litterman model incorporates investors' views to customize portfolios to their needs and beliefs. It involves determining the market's expected returns and covariances, then adjusting for investor opinions to calculate optimal asset weights that minimize portfolio variance.
Counterparty Credit RISK | Evolution of standardised approach - GRATeam
In this article, we focus on the new standardised methodology (SA-CCR) for computing the EAD of Counterparty Credit Risk portfolios. Implementing the SA-CCR approach will become increasingly important for banks given the publication of the finalised Basel III reforms, which require financial institutions to compute an output floor comparing their level of RWAs under internal and standardised approaches.
The document discusses the evolution of the standardized approach for determining counterparty exposure at default (EAD) under regulatory capital requirements. It provides context on counterparty credit risk and the need for a standardized EAD methodology. It then summarizes the key aspects of the new standardized approach for measuring counterparty credit risk exposures (SA-CCR), including how it calculates the replacement cost and potential future exposure in a more risk-sensitive manner compared to previous standard approaches. The document aims to concisely outline the main components and calculations of the SA-CCR as defined by the Basel Committee on Banking Supervision.
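The headline SA-CCR aggregation is EAD = alpha x (RC + PFE), with alpha fixed at 1.4 by the Basel Committee; the replacement-cost and PFE figures below are invented for illustration.

```python
# SA-CCR headline formula: EAD = alpha * (RC + PFE), with alpha fixed at 1.4.
# RC and PFE below are invented illustrative amounts for a single netting set.
alpha = 1.4
replacement_cost = 2_000_000              # current cost of replacing the netting set
potential_future_exposure = 3_500_000     # add-on for possible future market moves

ead = alpha * (replacement_cost + potential_future_exposure)
print(f"EAD = 1.4 x (2.0m + 3.5m) = {ead:,.0f}")
```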
This document provides a summary of an assignment analyzing Occidental Petroleum Corporation (OXY) stock. It includes 10 questions analyzing OXY's stock returns over time, fitting a Capital Asset Pricing Model and GARCH models to the returns data, forecasting future returns and variances, and assessing residuals. The best fitting model was an ARMA(1,1)-GARCH(1,1) model. Forecasts from this model over a 500 day period showed the conditional mean and variance stabilizing over the long-run horizon.
This document is a project report submitted by Stephen Arthur Bradley that empirically calculates an optimal hedging method. It contains an acknowledgment of sources, table of contents, abstract, and sections on put call parity, volatility modeling using historical and implied methods, the Greeks (delta, gamma, vega), Black-Scholes model assumptions and equations, Heston and GARCH models, and a performance comparison of different hedging methods using these models. Code for delta hedging using Black-Scholes, Heston, and GARCH models is included in the appendices.
1. The document discusses modeling multivariate dependence using copula functions.
2. Copulas allow specifying marginal distributions independently and then modeling their joint dependence structure. This provides more flexibility than models that assume a joint distribution.
3. Key topics covered include copula properties, Sklar's theorem relating copulas to multivariate distributions, common copula types, and using copulas for risk modeling and pricing multi-asset derivatives (a minimal sampling sketch follows this list).
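A minimal Gaussian-copula sampling sketch: draw correlated normals, map them to uniforms with the normal CDF, then push them through arbitrary marginal inverse CDFs (exponential and Student-t here, chosen purely for illustration).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 10_000
rho = 0.7                                   # illustrative dependence parameter
corr = np.array([[1.0, rho], [rho, 1.0]])

# Step 1: correlated standard normals via a Cholesky factor
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T

# Step 2: transform to uniforms -> this pair follows a Gaussian copula
u = stats.norm.cdf(z)

# Step 3: apply arbitrary marginal inverse CDFs (illustrative choices)
x1 = stats.expon(scale=2.0).ppf(u[:, 0])    # exponential marginal
x2 = stats.t(df=4).ppf(u[:, 1])             # heavy-tailed Student-t marginal

# Dependence survives even though the marginals differ
print("rank (Spearman) correlation:", round(stats.spearmanr(x1, x2)[0], 3))
```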
This document discusses numerical methods for pricing financial derivatives. It covers discrete and continuous time frameworks, American and path-dependent options, and Monte Carlo simulation. The key points are:
1) Discrete models compute expected value through backward recursion on a lattice, allowing early exercise of American options (see the lattice sketch after this list). Continuous models generalize Black-Scholes.
2) Path-dependent options like lookbacks require Markovianization by introducing an auxiliary state variable. Lattice methods can be refined non-uniformly using adaptive meshing.
3) Monte Carlo simulation prices derivatives through discretization and sampling, with techniques to reduce variance like control variates.
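A compact Cox-Ross-Rubinstein lattice for an American put, illustrating the backward recursion with an early-exercise check mentioned in point 1; all inputs are illustrative.

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, steps=500):
    """Price an American put on a Cox-Ross-Rubinstein binomial lattice."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal stock prices and option payoffs (j = number of up moves)
    j = np.arange(steps + 1)
    S = S0 * u**j * d**(steps - j)
    V = np.maximum(K - S, 0.0)

    # Backward recursion with an early-exercise check at every node
    for n in range(steps - 1, -1, -1):
        S = S0 * u**np.arange(n + 1) * d**(n - np.arange(n + 1))
        cont = disc * (p * V[1:n + 2] + (1 - p) * V[:n + 1])
        V = np.maximum(cont, K - S)
    return V[0]

# Illustrative inputs: spot 100, strike 100, 5% rate, 20% vol, 1 year
print("American put value:", round(american_put_crr(100, 100, 0.05, 0.20, 1.0), 4))
```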
This document provides an overview of fixed income markets and models. It defines key terms and rates used in fixed income like LIBOR, day conventions, and yield curves. It also describes various fixed income products like FRAs, IRS, futures, and bonds. Finally, it discusses techniques for constructing the yield curve from market data using bootstrapping, as well as short-term interest rate models like Vasicek and Cox-Ingersoll-Ross models. The goal is to equip the reader with tools to understand market dynamics, implement hedging strategies, and price interest rate instruments.
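As a small companion to the short-rate models mentioned, here is an exact-discretisation simulation of the Vasicek model dr = kappa*(theta - r)*dt + sigma*dW, with made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Vasicek model dr = kappa*(theta - r)*dt + sigma*dW (illustrative parameters)
kappa, theta, sigma = 0.8, 0.03, 0.01
r0, T, steps, n_paths = 0.02, 5.0, 60, 10_000
dt = T / steps

# Exact transition: r_{t+dt} is normal with known mean reversion and variance
mean_factor = np.exp(-kappa * dt)
var = sigma**2 * (1 - np.exp(-2 * kappa * dt)) / (2 * kappa)

r = np.full(n_paths, r0)
for _ in range(steps):
    r = theta + (r - theta) * mean_factor + np.sqrt(var) * rng.standard_normal(n_paths)

print("simulated mean short rate at T:", round(r.mean(), 5))
print("theoretical mean              :", round(theta + (r0 - theta) * np.exp(-kappa * T), 5))
```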
The document discusses several econometric models for analyzing financial time series data. It covers ARMA models for modeling returns and volatility, including how to estimate and improve the models. It also discusses cointegration for analyzing long-run relationships and various multivariate models like vector error correction models and multivariate GARCH models for modeling conditional covariances between variables. Simulation methods like historical simulation, Monte Carlo simulation, and bootstrapping are also summarized for evaluating and simulating econometric models.
This document provides an overview of private equity and initial public offerings (IPOs).
It discusses the characteristics of private equity including venture capital financing for startups and leveraged buyouts (LBOs) for mature companies. It also covers legal structures, compensation models, restrictions to align investor and manager interests, and methods for valuing private companies.
The document then discusses IPO procedures including the rationale for going public, determining an offering price, and differences in approaches between countries.
This document defines accounting and financial reporting. It discusses the aims of accounting to manage and evaluate businesses. Financial accounting provides external information regulated by standards, while management accounting provides internal information. The key financial statements are the balance sheet, income statement, statement of cash flows and notes. Various formats are used to analyze trends, common sizes, liquidity, operations vs. finance, and classify expenses by nature vs function. Ratio analysis assesses performance through growth, solvency, liquidity and profitability ratios. Accounting rules and principles guide financial statement preparation and valuation.
The document provides an analysis of Brevini Power Transmission, including:
1. An overview of the company's history, strategic business units, and industry analysis using Porter's five forces model and SWOT analysis.
2. Expectations for return on invested capital (ROIC) based on assumptions and results from analyzing Brevini and its four main competitors.
3. A discounted cash flow (DCF) valuation of Brevini including estimates of the weighted average cost of capital (WACC), free cash flows, and terminal value to determine the company's current value.
This paper aims to introduce a new pricing algorithm to overcome the log-normality assumption for pricing derivatives. The paper analyzes SPX index options due to their high liquidity. Historical SPX return data from 2000-2012 exhibits excess kurtosis and skewness compared to a normal distribution. The paper is divided into three sections: Section I outlines the new pricing approach and analyzes SPX data, Section II explains the pricing methodology and performs stress tests, and Section III discusses the model's strengths/weaknesses and possibilities for future development.
A toxic combination of 15 years of low growth and four decades of high inequality has left Britain poorer and falling behind its peers. Productivity growth is weak and public investment is low, while wages today are no higher than they were before the financial crisis. Britain needs a new economic strategy to lift itself out of stagnation.
Scotland is in many ways a microcosm of this challenge. It has become a hub for creative industries and is home to several world-class universities and a thriving community of businesses – strengths that need to be harnessed and leveraged. But it also has high levels of deprivation, with homelessness reaching a record high and nearly half a million people living in very deep poverty last year. Scotland won’t be truly thriving unless it finds ways to ensure that all its inhabitants benefit from growth and investment. This is the central challenge facing policy makers both in Holyrood and Westminster.
What should a new national economic strategy for Scotland include? What would the pursuit of stronger economic growth mean for local, national and UK-wide policy makers? How will economic change affect the jobs we do, the places we live and the businesses we work for? And what are the prospects for cities like Glasgow, and nations like Scotland, in rising to these challenges?
Vicinity Jobs’ data includes more than three million 2023 OJPs and thousands of skills. Most skills appear in less than 0.02% of job postings, so most postings rely on a small subset of commonly used terms, like teamwork.
Laura Adkins-Hackett, Economist, LMIC, and Sukriti Trehan, Data Scientist, LMIC, presented their research exploring trends in the skills listed in OJPs to develop a deeper understanding of in-demand skills. This research project uses pointwise mutual information and other methods to extract more information about common skills from the relationships between skills, occupations and regions.
The Impact of Generative AI and 4th Industrial Revolution - Paolo Maresca
This infographic explores the transformative power of Generative AI, a key driver of the 4th Industrial Revolution. Discover how Generative AI is revolutionizing industries, accelerating innovation, and shaping the future of work.
"Does Foreign Direct Investment Negatively Affect Preservation of Culture in the Global South? Case Studies in Thailand and Cambodia."
Do elements of globalization, such as Foreign Direct Investment (FDI), negatively affect the ability of countries in the Global South to preserve their culture? This research aims to answer this question by employing a cross-sectional comparative case study analysis utilizing methods of difference. Thailand and Cambodia are compared as they are in the same region and have a similar culture. The metric of difference between Thailand and Cambodia is their ability to preserve their culture. This ability is operationalized by their respective attitudes towards FDI; Thailand imposes stringent regulations and limitations on FDI while Cambodia does not hesitate to accept most FDI and imposes fewer limitations. The evidence from this study suggests that FDI from globally influential countries with high gross domestic products (GDPs) (e.g. China, U.S.) challenges the ability of countries with lower GDPs (e.g. Cambodia) to protect their culture. Furthermore, the ability, or lack thereof, of the receiving countries to protect their culture is amplified by the existence and implementation of restrictive FDI policies imposed by their governments.
My study abroad in Bali, Indonesia, inspired this research topic as I noticed how globalization is changing the culture of its people. I learned their language and way of life which helped me understand the beauty and importance of cultural preservation. I believe we could all benefit from learning new perspectives as they could help us ideate solutions to contemporary issues and empathize with others.
Econometrics: Basic
Giulio Laudani #13 Cod. 20191
Econometrics
Black-Litterman Model
OLS
VAR and volatility estimation
Stock for the long run
Style analysis (OLS application)
Principal component
Logarithmic random walk
Types of return and their properties
Markowitz optimization portfolio (Algebra calculus application)
Probability Mathematics and Laws
Matlab question
Black-Litterman Model
The scope of the Black-Litterman model is to estimate market expected returns while avoiding the Markowitz optimization pitfall: historical returns are so volatile that narrow confidence intervals at high probability levels cannot be defined, and the resulting sampling error makes it impossible to use the Markowitz method directly to find sensible market-portfolio weights. The basic idea is to use, as weights for the market allocation, the ones computed starting from those provided by some well-diversified index, and to adjust them with our views, expressed as departures from that index asset allocation. It is an application of Bayesian statistics: basically we want to find a new distribution given some new information provided by us.
The proposed methodology is a multi-step process:
First, we perform the estimation of the B-L variables:
o We choose a market index, from which we obtain the corresponding weights. Here we make an assumption on the index: the chosen market proxy should be mean-variance efficient. This assumption is not very strong, since it is reasonable that a market proxy is at least not too far from mean-variance efficiency. However, we should remember that a subset of an efficient portfolio is not in general efficient (it would be only if the sub-portfolio were built by a random sampling technique, so that it keeps the same sub-class exposure).
o The available market information is summarized by a normal distribution, whose mean is the estimated market expected return and whose variance is the Var-Cov matrix times a scalar smaller than one.
o We already know the relationship between the Var-Cov matrix, the weights, the market return and the risk-aversion coefficient, as defined by Markowitz optimization; hence it is possible to invert that formula and back out the implicit market expectation.
Since the estimated expected market return depends heavily on the choice of the proxy index, to lessen the problem we should use a large portfolio. However, the larger the portfolio, the more demanding the computation, so to keep things numerically manageable we can lean on the CAPM: we use a large portfolio and estimate, for each of our securities, just the beta, so we do not need to estimate the whole Var-Cov matrix. (The drawback is that stocks with low correlation with the market tend to give unstable results, so a multifactor model becomes necessary.)
o Γ is the transforming parameter of the Var-Cov matrix; its meaning is to account for the relative importance given to the market information versus our view information. What matters is the ratio between it and the view matrix: the higher the ratio, the higher the confidence in the market.
o We make some assumptions on the Var-Cov matrix. The matrix is usually estimated from monthly historical data (typically a three-year time frame) or by smoothed estimates.
A typical problem with the Var-Cov matrix is the overestimation of correlation, which lowers the positive effect of diversification: if two securities have similar expected returns and high correlation, there will be an over-concentration in the asset with the higher expected return. There exists a procedure to reduce this problem, similar to the adjusted-beta one: blend (take a weighted average of) the estimated matrix with a reference matrix that has ones on the diagonal and the average of the estimated off-diagonal elements everywhere else.
o The risk-aversion parameter (assuming the absence of the risk-free asset) is given by the Markowitz formula: variance over expected excess return. Note that the denominator is an a priori guess, since it is exactly what we are looking for; an iterated process can be used.
o The views must be given in numerical form, so that their effect on the allocation can be checked immediately. The asset manager's views are portfolio returns, summarized by a normal distribution whose mean is the expected return of the view portfolios (given the manager's views) and whose Var-Cov matrix is diagonal, expressing the confidence in those views (ideally the matrix is set so that a 95% confidence interval contains the views). P is the matrix of weights that maps the expected returns of the securities in the market into the expected returns V of the view portfolios.
Given all the previous information, Black and Litterman propose to combine the two sets of information using an optimization equation that minimizes the distance between our parameters and both the market's and the manager's information.
Note that if we use only the market-portfolio information, the investor ends up with the market portfolio itself; the innovation of the model is the possibility of adding views and so obtaining a different allocation.
The solution can be expressed in two ways:
It can be seen as the tangency portfolio of the Markowitz optimization theorem, to which we add a spread position representing the view correction.
Or it can be seen as a weighted average of market and view information; the parameter g is a constant that makes the equivalent weights sum to 1.
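A minimal MATLAB sketch of the two steps just described (inverting the Markowitz relation to get implied returns, then blending them with one view). All numbers, the two-asset Sigma, delta, tau, P, Q and Omega, are illustrative values of mine, not taken from the notes; tau plays the role of the scalar called Γ above, and the posterior formula is the standard Bayesian combination the notes describe in words.

% Step 1: invert the Markowitz relation to get implied market expected returns.
Sigma = [0.040 0.012; 0.012 0.090];    % Var-Cov matrix of two assets (illustrative)
w_mkt = [0.6; 0.4];                    % weights of the well-diversified index
delta = 2.5;                           % risk-aversion coefficient
Pi    = delta * Sigma * w_mkt;         % implied equilibrium expected excess returns

% Step 2: combine the market prior N(Pi, tau*Sigma) with one view "asset 1 returns 5%".
tau   = 0.05;                          % scalar < 1 multiplying Sigma (Gamma in the notes)
P     = [1 0];                         % view portfolio weights
Q     = 0.05;                          % view expected return
Omega = 0.001;                         % diagonal view variance = confidence in the view

A     = inv(tau*Sigma) + P' * (Omega \ P);            % combined precision
mu_bl = A \ (inv(tau*Sigma)*Pi + P' * (Omega \ Q));   % posterior expected returns

w_bl = (delta*Sigma) \ mu_bl;          % allocation implied by the posterior returns
disp([Pi mu_bl]); disp(w_bl);          % with no views, w_bl falls back to w_mkt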
OLS
OLS, ordinary least squares, is a method used to estimate the parameters of a linear regression: a set of variables regressed on common factors. It assumes a linear relationship between the dependent variable and the coefficients, not necessarily in the independent variables.
Besides OLS there exist other methods to estimate the regression parameters: method of moments and maximum likelihood. The OLS estimate minimizes the sum of squared errors, so that on average the model equals Y. This method is preferred because it has an analytic solution and, under certain hypotheses, it is superior to any other method, as proved by the Gauss-Markov theorem. An estimator needs an important feature to be useful, namely unbiasedness; if that cannot be achieved, we require consistency, an asymptotic property that needs fewer hypotheses on the error and on its correlation with the independent variables.
Setting the first derivative to 0 (a sufficient condition, since the objective is convex) we end up with the normal equations X'(y - Xb) = 0, hence the OLS estimate b = (X'X)^-1 X'y.
From this formula we see that the estimation quality increases as the range of the independent variables increases. As the formula shows, the randomness of the dependent variable comes from the presence of the error; hence the conditional and unconditional distributions have the same form and, under the weak hypotheses, the estimator is unbiased (so the model is on average equal to Y) and its variance is V(b) = s^2 (X'X)^-1.
OLS requires certain hypotheses to work properly and to allow the user to make inference with confidence intervals (CI).
The weak hypotheses are three, and together they ensure that the OLS estimators are BLUE:
o The expected value of the error is 0 (always the case if the intercept is included in the model) and the errors are not correlated with X; if X is random we should instead require a zero conditional mean of the error given X.
o The variance of the error is constant and the correlation among errors is 0; if this hypothesis fails we can still estimate β with the generalized least squares (GLS) method, where it is still BLUE: we transform the original equation so as to obtain another one with spherical errors, and the resulting estimator is b_GLS = (X'Ω^-1 X)^-1 X'Ω^-1 y, where Ω is the error Var-Cov matrix.
o The matrix X must be full rank, to avoid multicollinearity and to ensure that the matrix (X'X) is invertible. The effect of multicollinearity is an increase in the variance of the betas.
The Gauss-Markov theorem states that the OLS estimator is BLUE, using the following definition of variance efficiency: if two estimators are both unbiased, the first is not worse than the second iff the difference of their variance matrices (second minus first) is at least positive semi-definite. We should also consider that if we want to estimate a set of linear functions Hβ, where H is not random, the definition of BLUE estimator is invariant to this. We call this property "invariance to linear transforms", and it is the strongest argument in favour of this definition of a 'not worse' estimator. An implied hypothesis in the theorem is that the class of estimators considered is linear in the dependent variable.
The strong hypotheses are two: the errors are independent of each other and of X, and they are normally distributed. It follows that the betas have the same (normal) distribution, since they are linear combinations of the errors. Under these hypotheses we can build confidence intervals and test the statistical significance of the model parameters.
There are several tests used in statistics to assess the fit and the significance of the coefficients, jointly and one by one:
o The t-ratio: since the error variance is unobservable, we use the sample variance, so instead of the Gaussian distribution we use the Student-t distribution with n degrees of freedom; we use the percentile z_a to define the CI and check whether 0 is included, or equivalently we look at the p-value "a"; this is done for each estimator. The general idea is to divide the numerator (the hypothesized quantity) by its standard deviation. The paired-sample approach is a procedure to test the difference between two estimators by introducing the difference "d" as a new parameter in the model; its standard deviation is then computed automatically by the model and takes into account the potential correlation among the estimators (positive correlation reduces the variance).
o The F-test is used to test several hypotheses jointly. The F ratio compares the restricted and unrestricted residual sums of squares, scaled by the respective degrees of freedom, where k is the number of parameters and q the number of restrictions tested. The F-test on a single variable in general gives the same result as a two-sided t-test.
o The R^2 is well defined if the constant is included in the regression or, more generally, if the residuals are orthogonal to the fitted values. This measure can never decrease when new independent variables are added. Note that in the univariate case R^2 equals the squared correlation between x and y, since the fitted y is a linear function of x.
Some considerations based on exam questions:
o Cov(r1, r2), where both returns have been regressed on the same factors, can be computed from the factor loadings and the factor Var-Cov matrix (assuming uncorrelated residuals).
o Remember that the expected value of each beta in a multivariate regression is the true beta, and so the expected difference of any pair of betas is the difference of the true betas.
o If we use the estimated OLS parameters to make inference in a region outside the observed X (forecasting), we have to assume that the betas in the new region are the same and are still normally distributed with the same parameters. The target function for the CI is the predicted conditional mean; if we build a CI for the forecast itself, the interval must also include the variance of the error term, so it is wider.
o If the constant is included in the model, the residuals sum to zero and the fitted value of y at the average value of X is the average of the fitted values, which equals the average of the actual y as well; if we use a model without intercept this no longer holds.
o The mean square error is the expected squared difference between T and the true value, where T is the estimator.
o If we do not estimate the complete model but omit one independent variable that is correlated with the included variables, our coefficients will be biased, since the regressors are then correlated with the error.
o If the intercept is excluded from the model (and it is actually different from zero) then the estimates of the betas are biased. However, if the intercept really is 0, the coefficient variances will be lower.
o If Cov(Xi, Xj) = 0, then each beta can be estimated with the univariate formula, since the Var-Cov matrix V of X is then diagonal.
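A minimal MATLAB sketch of the OLS machinery described above (estimate, t-ratios, R^2) on simulated data; the data-generating values are illustrative and chosen by me, not taken from the notes.

% OLS on simulated data: normal equations, t-ratios and R^2.
n = 200;
X = [ones(n,1) randn(n,2)];            % regressors, intercept included
beta_true = [0.5; 1.0; -2.0];
y = X*beta_true + 0.3*randn(n,1);      % dependent variable

b   = (X'*X) \ (X'*y);                 % b = (X'X)^-1 X'y
res = y - X*b;                         % residuals
k   = size(X,2);
s2  = (res'*res) / (n - k);            % sample variance of the error
Vb  = s2 * inv(X'*X);                  % estimated Var-Cov matrix of the betas
t_ratio = b ./ sqrt(diag(Vb));         % coefficient-by-coefficient t-ratios

R2 = 1 - (res'*res) / sum((y - mean(y)).^2);  % valid because the intercept is included
disp([b t_ratio]); disp(R2);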
VAR and volatility estimation
Before talking about VaR and its estimation procedures, we should spend some words on volatility itself, on its meaning and on how to estimate it.
In finance, volatility is used as a measure of risk, to give a sense of the unpredictability of an event. It is usually computed by looking at the historical behaviour of a variable, or by looking at the derivatives market, i.e. the implied volatility, which is the value that makes the market price consistent with the other inputs of a pre-specified pricing formula.
In finance, tail behaviour is essential to estimate VaR, which is used to assess the maximum possible future loss over a certain time interval. (A general limit of the VaR methodology is that it gives no information on the event causing the loss, only the probability of that event. It also ignores the distribution beyond the estimated quantile; furthermore it is a pro-cyclical measure: since many of the proposed methodologies use parameters, or more generally data, from a past time interval, a positive (negative) trend brings a positive (negative) momentum that biases the estimate downward (upward).) The VaR inputs are the exposure amount and the percentile indicating the given probability of experiencing a loss at least equal to the one indicated by the percentile itself. As we can see, the hypothesis on the distribution of the tails of returns is the key to the meaningfulness of this tool. The book proposes four possible data distributions:
The parametric one is the first methodology proposed. It consists of a Gaussian distribution with parameters inferred from historical data. The parameters needed to find any quantile are the mean and the volatility; they are estimated from historical data, and in detail the volatility is estimated using RiskMetrics.
o Our goals are to estimate the quantile and its lower bound, since the variance is itself estimated.
The quantile is the mean plus the corresponding Gaussian percentile times the volatility, where a proxy is assumed for the mean; the lower bound of the quantile is then obtained by accounting for the fact that the variance is itself estimated.
o This method has several limits highlighted by empirical evidence; in fact the underlying hypothesis of Gaussian returns is contradicted by the data.
Mixture of Gaussian distributions. It consists of a mixture of two or more distributions (Gaussian or not) with different parameters, weighted by their probability of occurrence. The general idea is to use the parameters of the normal regime for the first component and those of the exceptional regime for the second. The blended distribution can be fitted only numerically, by maximum likelihood; the mixing probability is estimated before running the quasi-likelihood function (in log form). However, the tails still decline at an exponential rate, like the Gaussian distribution; this method is like a GARCH model with infinitely many components, so the unconditional distribution has non-constant variance.
The non-parametric approach consists of using the empirical distribution, based on a frequentist approach: we use the empirical cumulative distribution function, so no parameters are needed.
o The confidence intervals are built by finding the i-th ordered observation at which the desired empirical probability is reached, using the frequentist approach.
o To find the lower bound we need to compute the volatility of the frequentist probability:
We compute the probability of occurrence of that i-th ordered observation using a binomial distribution.
The cumulative distribution of the order statistic follows from the binomial probabilities.
For n large, the distribution converges to a Gaussian.
The lower bound is the j-th ordered observation, where j is the index that attains the desired coverage probability.
o The drawback is the little insight provided for extreme quantiles, since the observations become either granular (non-contiguous) or totally absent; hence this method is weak, compared with a parametric alternative, because of high sampling error.
The semi-parametric approach is a blend of a parametric model for the central values (close to the mean) and a non-parametric one for the tails; the non-parametric part is also used to find where to plug in the tail model.
o The parametric part for the central values is a Gaussian distribution, as in the parametric approach.
o The non-parametric part suggested for the tail data consists of building a function that approximates the behaviour of the tails: the tail probability behaves like L(x) times x to the power -a, where L(.) is a slowly varying function and a is the speed at which the tail goes to 0.
o To estimate a (the only parameter) we represent the log frequency distribution as a linear function of the log of the observations: the intercept C is a constant collecting all the approximations made (in fact the log of a slowly varying function is basically a constant), and a is estimated by OLS; the tails therefore decline at a polynomial rate.
o Then we search graphically for the plug-in point, which is the point from which the empirical cumulative distribution starts to behave as a linear function (on the log scale).
Once we have found that subset of data, we use it to estimate the quantile, given a, the target probability and the first point where the plug-in starts.
The lower bound is then obtained from the quantile probability as in the previous cases.
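A minimal MATLAB sketch contrasting the first two approaches, the parametric (Gaussian) quantile and the non-parametric (empirical) quantile, on simulated returns; the return-generating numbers, the exposure and the hard-coded 1% Gaussian percentile are illustrative choices of mine, not from the notes.

% Parametric vs non-parametric VaR on simulated daily returns.
r = 0.0005 + 0.01*randn(5000,1);       % daily returns (illustrative)
alpha = 0.01;                          % probability of a worse loss
expo  = 1e6;                           % exposure amount

mu = mean(r); sigma = std(r);
z  = -2.326;                           % 1% Gaussian percentile (hard-coded, no toolbox)
q_param = mu + z*sigma;                % parametric quantile: mean + z*sigma

rs = sort(r);                          % empirical CDF via ordered observations
i  = max(1, floor(alpha*numel(rs)));   % i-th ordered observation at the wanted probability
q_np = rs(i);

VaR_param = -expo * q_param;           % losses reported as positive amounts
VaR_np    = -expo * q_np;
fprintf('Parametric VaR: %.0f   Empirical VaR: %.0f\n', VaR_param, VaR_np);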
Stock for the long run
"Stock for the long run" is a common mistake in finance. It states that an investor should choose the investment strategy with the highest expected return without considering the underlying risk. This statement is based on two ex-ante hypotheses and one ex-post hypothesis; they come from the intuition (in an LRW world) that after a sufficiently long time period any kind of return can be achieved, regardless of risk:
First hypothesis: given the Sharpe ratio formula, the idea is that with a sufficiently large n any result can be obtained; in other words, there is a time horizon at which the probability of obtaining a given expected return is reached (usually with 95% confidence). It is a direct consequence of the LRW hypothesis, which states that the expected return grows at rate n while volatility grows at rate equal to the square root of n (this is the theme of explosive growth in unit-root autoregressive models, of which the LRW is one).
Second hypothesis: taking two investment strategies with the same mean and variance, one invested in 10 uncorrelated securities for one year and the other in just one security for 10 years, the hypothesis suggests the existence of time diversification.
Third hypothesis (a posteriori): looking at the historical performance of the US stock exchange, it seems to make sense to invest in it rather than in other strategies.
As can be seen, it is a consequence of how we build confidence intervals; however, it can be proven wrong:
First critique: it makes hypotheses on the investor's utility function, i.e. on how he chooses his investment strategy. The statement assumes that investors choose only by comparing Sharpe ratios over the long run, and that they will not change their strategy. A further comment concerns the strategy itself: being confident in the Sharpe criterion only over a certain long time frame, but rejecting the same investment under the same criterion over each of the shorter sub-periods, amounts to assuming a peculiar utility function for the investor. Not only is the statement wrong, in that the investment is not superior over any given horizon; it only seems to be the best among all alternatives for sufficiently long horizons. Furthermore, since we are interested in the total return (not in the per-period expected return), we notice that the range of possible total returns increases at rate n over time, hence the uncertainty is not declining.
Second critique: it is an error based on the wrong idea that two investment strategies with different time frames are comparable; thus there is no such thing as time diversification.
Third critique: since the US stock market has shown the highest return over the last century, you should invest in stocks. This is an ex-post statement and cannot be proven for the future; in fact the positive US trend has been sustained by the economic growth of that economy, and we cannot infer a similar future success from historical data.
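One common way to illustrate the first critique is a small simulation in an LRW world: the probability of ending below the starting value falls with the horizon, yet the dispersion of the final result keeps widening. The annual mean and volatility below are example numbers of mine, not taken from the notes.

% Shortfall probability vs dispersion of terminal wealth in an LRW world.
mu = 0.06; sigma = 0.20; nsim = 20000;
for h = [1 5 10 30]
    total  = mu*h + sigma*sqrt(h)*randn(nsim,1);   % h-year total log return
    wealth = exp(total);                           % terminal wealth per unit invested
    p_loss = mean(total < 0);                      % probability of ending below the start
    ws = sort(wealth);
    lo = ws(round(0.025*nsim)); hi = ws(round(0.975*nsim));
    fprintf('h=%2d  P(loss)=%.3f  95%% wealth range=[%.2f, %.2f]\n', h, p_loss, lo, hi);
end
% The shortfall probability shrinks with the horizon, but the 95% range of final
% wealth keeps widening: the risk does not disappear with time.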
Style analysis (OLS application)
Style analysis is a statistical way to compare an asset manager's performance with a specified ex-post portfolio built from market indexes: we want to know whether the manager has been able to outperform the market, hence whether he has deserved the management fees. The capability to add value is value that is not replicable by an investor using public information; it is an ex-post analysis.
The suggested methodology consists of regressing the fund return on some indexes, which the investor subjectively assumes to be a good proxy of the management strategy.
We consider the spread between the actual return and the estimated return (hence we are considering the error) and we analyse whether its mean is statistically significant and whether the cumulative sum of the errors shows any trend.
Sharpe suggests building the model with the following procedure:
o Set the constraint on the betas: they must sum to one, with no intercept. This can be done by an ex-ante method or an ex-post one (normalizing the values); keep in mind that the two methods do not give the same results. This is a simplification made by Sharpe to ensure a self-financing strategy and to avoid the presence of a constant return over time (not even the risk-free asset can achieve this result; the theory of finance justifies this statement, and short-term risk-free rate investments can be used).
o The regression is run on subsets of constant length, rolled forward one period at a time.
The critiques of this methodology consist of three points:
o Setting the weights to maintain constant relative proportions is a limitation and a costly strategy; alternatives exist, such as buy-and-hold or trend strategies, and even within the constant-weight approach the weights could be allowed to change.
o If the fund manager knows how he will be judged, and he knows more than the investor about the composition of the market portfolio, he can easily outperform the benchmark; replicating the market portfolio ex ante is not an easy task.
o The analysis does not consider the difference in variance produced by the two strategies, and this can give an advantage to the fund manager.
There are three possible conclusions from the analysis, depending on the error value:
o The cumulative error is negative: this is strong evidence against the fund's performance, since a totally passive strategy would have been more efficient.
o The cumulative error is zero, or not statistically significantly different from 0: it is hard to assess whether the management performance is unsatisfactory.
o The cumulative error is positive: this cannot be considered evidence of the quality of the management team, since this measure alone is affected by many strong simplifying assumptions and does not consider the volatility difference between the passive strategy implemented and the actual one.
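A minimal MATLAB sketch of the Sharpe procedure on simulated data: regress the fund return on index returns with the betas constrained to sum to one (ex-ante via a Lagrangian solution, and the ex-post normalization as an alternative), then inspect the mean and cumulative sum of the errors. The data and the "true" style mix are illustrative values of mine.

% Style analysis: constrained regression of fund returns on index returns.
T = 120;
X = 0.01*randn(T,3);                         % three index returns (illustrative)
w_true = [0.5; 0.3; 0.2];
rf_fund = X*w_true + 0.002*randn(T,1);       % fund return = style mix + noise/"skill"

b_ols = (X'*X) \ (X'*rf_fund);               % unconstrained OLS, no intercept
A = inv(X'*X); o = ones(3,1);
b_c = b_ols - A*o * ((o'*b_ols - 1) / (o'*A*o));   % ex-ante constraint sum(b)=1
b_norm = b_ols / sum(b_ols);                 % ex-post normalization (not equivalent)

err = rf_fund - X*b_c;                       % error vs the passive style portfolio
cum_err = cumsum(err);                       % cumulative error: look for a trend
fprintf('mean error = %.5f (t = %.2f)\n', mean(err), mean(err)/(std(err)/sqrt(T)));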
Principal component
One of the key tasks in the asset management industry is to estimate expected returns as the sum of the prices of risk times the betas (sensitivities). Those prices are usually proxied by portfolio returns; the problem is that we need to estimate the factors and the betas jointly, so there is an infinite range of possible weights that could serve as a solution. There exist two methods to test the meaningfulness of such a model: the first is to check whether the intercept is equal to zero, but it is not very powerful and, furthermore, is not a good criterion (we may have a really well-fitting model which fails the test); the other is to test the linear relationship between return and beta. This second method is a two-step process: first we estimate the beta for each portfolio, then we run a cross-sectional regression to check that the estimated betas and factors are consistent with market data (to increase the power, returns are grouped into boxes that maximize the distance between observations). We may add other terms, like the square of beta or error terms, to see if those terms are meaningful.
Principal components is an old, alternative method to estimate factors and betas by using the spectral theorem, where the number of principal components is less than or equal to the number of original variables. The rationale of the method is to proxy the unobservable factors with portfolio returns, built to satisfy the constraints; basically we choose the principal-component portfolios as factors. We need to jointly estimate the factors and the betas.
This transformation is defined in such a way that the first principal component has as high a variance as possible, that is, it accounts for as much of the variability in the data as possible ("possible" meaning under the constraint that the squared weights sum to one, otherwise there would be no bound, since the variance could be changed arbitrarily by multiplying by a constant; alternatives such as using the modulus exist, but they do not allow an analytic solution). Each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (uncorrelated with) the preceding components; hence the first element of the error matrix is smaller than the smallest of the factors'. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables.
Assume we know the variance and have a time-independent Var-Cov matrix. This last assumption is added just to simplify the calculus; in fact more complex methodologies exist to apply principal components. Returns' variance can be represented via the spectral theorem. Another assumption is that V(r) is a full-rank matrix, so that its rank k equals m, the number of returns used.
o The spectral decomposition is V(r) = X Λ X', where X collects the eigenvectors and Λ is the diagonal matrix of eigenvalues, ordered from the highest to the smallest starting from the upper-left position. (Remember that the Var-Cov matrix should be positive definite; if it is only positive semi-definite we cannot directly apply the theorem this way. The eigenvalues solve the characteristic equation, whose order equals the rank of the Var-Cov matrix, so it can be solved only numerically.)
o The factors proposed are portfolio returns, computed using the eigenvectors and the market returns. Since each portfolio is built as x'r, where x is an eigenvector, each of these portfolios is independent of the others, so we can use the univariate formula to compute the betas: the betas are the eigenvector entries for the specified factor.
o The variance of these factors is equal to the diagonal matrix Λ of the spectral decomposition, hence it is diagonal.
o Since this model completely explains the return behaviour, to turn it into a model closer to a common regression we rearrange the formula: we divide the factors into two groups, the first becomes the variables matrix and the residual becomes the error matrix.
The residual matrix has mean 0 and is uncorrelated with the factors.
Thus the highest value in the Var-Cov matrix of the residuals is smaller than those of the factors.
The rank of the factor matrix is equal to q, where q is the number of factors considered.
There is a drawback in this methodology: it does not in general respect pricing theory, which states that there should be no extra remuneration for bearing no risk. In fact the residuals can be correlated with some returns, so they are not idiosyncratic, and this risk is not negligible: an asset can have an excess return even if it is not correlated with the included factors.
There is another way to build principal components: maximize the portfolio risk under the constraints that each portfolio is orthogonal to the other components and that the sum of the squared weights equals one.
o We build a Lagrangian function to maximize the variance under the constraint, and we end up with weights equal to the eigenvectors and variances equal to the diagonal elements of the spectral decomposition of the variance of returns.
The constraint is chosen to obtain an analytic solution, even though it has no economic meaning: in general a linear combination whose squared weights sum to one is not a portfolio. (We could instead constrain the absolute sum of the weights, but then only numerical solutions are available.)
o The book suggests looking at the marginal contribution of each component to the total variance, to notice how basically all the variance is explained by the first three components.
Assuming an unknown Var-Cov matrix: we can start from an a priori estimate of V(r) using historical data; however, its quality may be too low, which is why another methodology is suggested: estimate the components one by one, starting from the largest.
o This method consists of maximizing the variance under the usual constraint x'x = 1, leaving all the estimation error in the last components, since this improves the estimate of the first ones.
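A minimal MATLAB sketch of the spectral-theorem route described above, on simulated correlated returns (the mixing matrix is an illustrative choice of mine). It shows the eigenvector portfolios, the share of variance explained by each component, and the fact that the betas on a factor are the eigenvector entries themselves.

% Principal components of a return Var-Cov matrix via the spectral theorem.
T = 500; m = 4;
R = randn(T,m) * [1 .5 .4 .3; 0 1 .5 .4; 0 0 1 .5; 0 0 0 1];  % correlated returns
V = cov(R);                                  % Var-Cov matrix of the returns

[X, L] = eig(V);                             % spectral decomposition V = X*L*X'
[lam, idx] = sort(diag(L), 'descend');       % order eigenvalues from the largest
X = X(:, idx);                               % eigenvectors = PC portfolio weights

F = R * X;                                   % factor (principal-component) returns
explained = lam / sum(lam);                  % marginal contribution to total variance
disp(explained');                            % the first few usually explain most of it

beta_on_f1 = X(:,1);                         % betas on factor 1 = first eigenvector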
Logarithmic random walk
In finance we are interested in forecasting returns; however, the uncertainty around returns is not predictable (differently from games of chance), so we need to make assumptions on a possible probability distribution. One of the first models used is the LRW. It assumes that the price evolution over time is approximated by a stochastic difference equation: ln P(t) = ln P(t-1) + mu + eps(t).
As the equation shows, the current price level depends on the past evolution and on an idiosyncratic component, so it is like saying that price movements are driven by modelled chance, with the underlying distribution assumed to be Gaussian. The log form is used because it allows multi-period returns to preserve normality, since the log of a product is the sum of the logs (linear functions preserve the underlying distribution). The idiosyncratic component has zero mean, constant variance, and zero covariance across time. Sometimes the hypothesis is added that the errors are jointly normally distributed, hence independent of each other, consistently with the chosen time window: if observations are aggregated across periods, the new idiosyncratic component will be uncorrelated only for the new time window, but it will be correlated with the intermediate ones, hence those intermediate observations must be dropped. Note that, when aggregating the variance over time with a correlation structure between correlated observations, the result is no longer n times the one-period variance but also includes the covariance terms; hence the variance increases at a higher rate than the LRW variance if the correlation is positive.
Nowadays the logarithmic random walk is simply used as a descriptive device and as a convention to make accruals on returns, since no alternative has reached enough consensus in finance; however, the LRW hypotheses are contradicted by empirical data: prices do not evolve by chance as suggested by the LRW; there is strong empirical evidence against constant variance and in favour of correlation among securities; and it can lead to negative price levels.
The accrual convention consists of annualizing returns by multiplying the one-period expected return by the number of periods and the one-period volatility by its square root; this is the correct procedure in the LRW case, while for actual securities it is only an accrual convention.
Another proposed model is the geometric RW, which is basically the LRW applied to prices instead of returns; this model implies a log-normal distribution (hence a positive skewness, which is related to the number of periods considered). Some useful properties are: the price cannot become negative, and the volatility is a function of the price level (lower for small prices, larger for big ones).
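A minimal MATLAB simulation of the LRW equation above, with illustrative daily parameters chosen by me; it also checks the additivity of log returns over time (the multi-period return depends only on the first and last prices).

% Logarithmic random walk: ln P(t) = ln P(t-1) + mu + eps(t).
mu = 0.0003; sigma = 0.01; T = 250; P0 = 100;
innov = sigma*randn(T,1);                   % idiosyncratic Gaussian component
logP  = log(P0) + cumsum(mu + innov);       % log price path
P     = exp(logP);                          % price level: always positive

r_total = logP(end) - log(P0);              % multi-period log return
r_sum   = sum(mu + innov);                  % sum of one-period log returns (identical)
fprintf('total log return %.4f, sum of one-period log returns %.4f\n', r_total, r_sum);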
Types of return and their properties
Even if in finance we are interested in the price evolution over time, all the models and assumptions are based on returns. The easiest hypothesis on their possible evolution is basically to suppose the existence of a stationary process in prices; this statement is highly contentious in finance. There are two types of return, neither better than the other; which to use depends on what we want to do:
Linear returns are best used for portfolio returns over one time period, to compute the expected return and variance of a portfolio: the linear portfolio return is the weighted sum of the securities' linear returns (r_p = sum of w_i r_i, a linear function). The log return of a portfolio, instead, has no linear function putting the securities together, hence any combination of stocks has a non-linear relationship, making any optimization problem incredibly difficult, because the portfolio return is a non-linear function of the stocks' returns.
Logarithmic returns are best used for single-stock returns over time: in this case the multi-period return depends only on the initial and final elements of the time series, since the log returns simply add up across periods.
The relationship between these returns can be better understood by using the Taylor expansion of the log return, which coincides with the linear return if truncated at the first-order term: ln(1 + r) = r - r^2/2 + ... This formula shows that the difference between the linear and the log return (for price ratios far from 1) is always greater than zero, since ln(1 + r) <= r.
In finance the ratio of consecutive prices (possibly corrected to take accruals into account) is often modelled as a random variable with an expected value very near to 1. This implies that the two definitions give different values with sizable probability only when the variance (or more generally the dispersion) of the price-ratio distribution is non-negligible, so that observations far from the expected value have non-negligible probability. Since standard models in finance assume that the variance of returns increases as the time between returns increases, the two definitions are more likely to give different values when applied to long-term returns.
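To make the distinction concrete, here is a small MATLAB illustration with toy prices and weights of my own choosing: the linear portfolio return is the weighted sum of asset returns, the log portfolio return is not, and log returns add up over time.

% Linear vs logarithmic returns on a toy two-asset portfolio.
P = [100 103 101 106; 50 49 52 53]';        % prices: 4 dates x 2 assets (illustrative)
w = [0.7 0.3];                              % portfolio weights

rl = P(2:end,:) ./ P(1:end-1,:) - 1;        % linear one-period returns
rg = diff(log(P));                          % log one-period returns

rp_lin       = rl * w';                     % portfolio linear return = weighted sum
rp_log       = log(1 + rp_lin);             % actual portfolio log return (non-linear in asset returns)
rp_log_naive = rg * w';                     % weighted sum of log returns: NOT the portfolio log return

total_log_asset1 = sum(rg(:,1));            % equals log(P(end,1)/P(1,1)): log returns add up
disp([rp_lin rp_log rp_log_naive]); disp([total_log_asset1 log(P(end,1)/P(1,1))]);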
The mean is hard to estimate due to the relative size of the volatility, which is so large that the confidence interval basically ends up including 0. Furthermore, increasing the sampling frequency provides no benefit: nothing changes, since for instance the monthly mean and the monthly volatility scale with the horizon in such a way that the precision of the annualized mean is unchanged.
We estimate the volatility using historical data; however, several procedures can be implemented, from the simplest, based on the LRW hypothesis, to more complex ones that properly address some empirical features of volatility.
The one based on the LRW is simply the equally weighted sum of the squared differences between each observation and the mean. This measure has one big drawback: the hypothesis that the marginal contribution of the newest observation to the estimate equals that of the oldest.
To overcome this assumption and move towards a procedure more tailored to the market, the financial industry has introduced the RiskMetrics procedure. The new formula is an exponentially smoothed estimate, sigma^2(t) = lambda * sigma^2(t-1) + (1 - lambda) * r(t)^2, with coefficient lambda usually around 0.95 and bounded between 0 and 1, under the hypothesis of zero mean (this is a consequence of the data: the volatility of the mean is so high that over short intervals the mean is not significantly different from 0; it also ensures a more conservative volatility estimate, good for a long-term investor but not for a trader or hedger).
Alternatively, the estimate can be written as an exponentially weighted sum of past squared returns plus a term depending on the initial value, and that last term goes to zero for n large.
o The drawback of this estimate is the loss of the unbiasedness property (if the data have constant volatility), and the formula basically amounts to truncating the available information, with daily data, at one year at most, even for a high coefficient.
Unbiasedness holds only if w_i = 1/n.
The variance of the estimator is minimized with w_i = 1/n, as can be shown with a Lagrangian.
The variance of the variance estimate, on the other hand, is small relative to the quantity being estimated, and it can be improved by increasing the frequency. The fourth moment is computed assuming Gaussian returns, so it is three times the squared variance; with monthly frequency the formula gives a smaller value than before.
Both volatility estimators suffer from the so-called ghost problem: an extremely large new observation has a big impact on the level of the estimate. This behaviour is asymmetric, since extremely low observations are capped, and it is more severe for the classic formula, where the volatility level changes abruptly when the outlier drops out of the sample, or declines only at rate 1/n if the whole sample is kept. In the case of the smoothed estimator the outlier's impact instead decays gradually, by the smoothing factor each period.
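A minimal MATLAB comparison of the two estimators just discussed, equally weighted rolling variance versus the RiskMetrics exponential smoother, with one artificial outlier to show the ghost effect; the window length, lambda and the simulated returns are illustrative choices of mine.

% Equally weighted vs exponentially smoothed (RiskMetrics) volatility.
r = 0.01*randn(1000,1); r(500) = -0.08;      % returns with one large outlier
lambda = 0.95; win = 250;

sig2_ew   = NaN(size(r));
sig2_ewma = NaN(size(r));
s = var(r(1:win));                           % initialize the smoother
for t = win+1:numel(r)
    sig2_ew(t) = mean(r(t-win+1:t).^2);      % each observation weighted 1/n, zero mean
    s = lambda*s + (1-lambda)*r(t)^2;        % sigma^2(t) = lambda*sigma^2(t-1) + (1-lambda)*r(t)^2
    sig2_ewma(t) = s;
end
% The outlier drops abruptly out of the rolling window after 'win' days, while its
% impact decays smoothly (by a factor lambda per day) in the smoothed estimate.
plot(sqrt([sig2_ew sig2_ewma])); legend('equally weighted', 'RiskMetrics');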
Markowitz optimization portfolio (Algebra calculus application)
Markowitz portfolio optimization is a methodology to build mean-variance efficient portfolios from a set of stocks. In general this is not related to the CAPM, which is a general equilibrium model; however, if we consider the whole market, the Markowitz optimal portfolio becomes the CAPM market portfolio itself.
The model assumes that the criterion used in the market is mean-variance efficiency and that the investment time window is unique and fixed at the beginning of the investment process (no changes after that).
o The hypotheses needed to apply this method are that we know both the expected returns of the single stocks and the Var-Cov matrix; if those assumptions fail there will be problems on the sampling-error side.
o One possible formulation: the portfolio is built to minimize the variance under the constraint of achieving a specific return. One of the most important results is that the relative weights within the risky portfolio are the same and do not depend on the chosen return. This is a first instance of the separation theorem: the expected return we want to achieve depends solely on the allocation between the risk-free asset and the risky portfolio.
o The same result can be achieved by maximizing the return for a given level of risk; the tangency portfolio in this case is equivalent to the result of the previous formulation. This is a sort of mean-variance utility function.
The volatility of the overall return is always equal to the risky portfolio's volatility times the weight invested in it.
The ratio of the expected excess return to its standard deviation is the same for all such portfolios, hence all the portfolios have the same marginal contribution to the composition of the stock-portfolio risk.
We want to show two results: first, that the relation between expected return and volatility is a straight line whose slope is the Sharpe ratio; basically we want to show that all the portfolios have the same Sharpe ratio:
o We consider the optimal weights and plug them into the Markowitz lambda;
o we then plug this lambda back into the Markowitz weights;
o we have to consider the market allocation;
o the portfolio return follows: computing the expected value and the variance of this equation and equating them through the common multiplier, we end up with the two results above.
Investors take on risk in order to generate higher expected returns. This trade-off implies that an investor must balance the return contribution of each security against its portfolio risk contribution. Central to achieving this balance is some measure of the correlation of each investment's returns with those of the portfolio.
We do not believe there is one optimally estimated covariance matrix. Rather, we use approaches designed to balance trade-offs along several dimensions and choose parameters that make sense for the task at hand.
o One important trade-off arises from the desire to track time-varying volatilities, which must be balanced against the imprecision that results from using only recent data. This balance is very different when the investment horizon is short, for example a few weeks, versus when it is longer, such as a quarter or a year.
o Another trade-off arises from the desire to extract as much information from the data as possible, which argues for measuring returns over short intervals. This desire must be balanced against the reality that the structure of volatility and correlation is not stable and may be contaminated by mean-reverting noise over very short intervals, such as intraday or even daily returns.
All the portfolios must have the same Sharpe ratio: if we can invest in the risk-free asset we can add the risk-free rate as an intercept; otherwise a self-financing strategy should have an intercept of zero.
For an equally weighted portfolio, the variance can be decomposed into two terms: the first is the average variance divided by n, the second is the average covariance times (n-1)/n.
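A minimal MATLAB sketch of the tangency-portfolio construction and of the claim that all mixes of the risk-free asset and the risky portfolio share the same Sharpe ratio; the expected returns, the Var-Cov matrix and the risk-free rate are illustrative numbers of mine.

% Tangency portfolio and the common Sharpe ratio.
mu    = [0.08; 0.10; 0.12];                 % expected returns (illustrative)
Sigma = [0.04 0.01 0.00; 0.01 0.09 0.02; 0.00 0.02 0.16];
rf    = 0.02;

w_unsc = Sigma \ (mu - rf);                 % direction of the tangency portfolio
w_tan  = w_unsc / sum(w_unsc);              % scale the weights to sum to one

for a = [0.25 0.5 1.0]                      % fraction invested in the risky portfolio
    m = rf + a*(w_tan'*mu - rf);            % expected return of the mix
    s = a*sqrt(w_tan'*Sigma*w_tan);         % volatility of the mix
    fprintf('a=%.2f  Sharpe=%.4f\n', a, (m - rf)/s);   % identical for every a
end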
Probability Mathematics and Laws
P(A|B) = P(A and B) / P(B) is the conditional probability; this formula allows us to update a probability with new information. Bayes proposed an alternative (more usable) formula: P(A|B) = P(B|A) P(A) / P(B).
Any random variable has an associated distribution:
o In the continuous case the density is f(x) and the cumulative probability is F(x) = P(X <= x), which is a strictly increasing function.
The concept of percentile (q bounded between 0 and 1): it is the probability that X <= z, where z is the corresponding value below which q% of the data/values fall.
The expected value is the integral of x f(x), while the variance is the integral of (x - E(X))^2 f(x); these are population moments, computed using the true CDF.
Each distribution is defined by three kinds of parameters:
o Location parameters are those which shift the distribution to the right or left.
o Shape parameters are the residual definition.
o Scale parameters are those which change σ and nothing else.
Possible (useful) distributions:
o Binomial (repeated Bernoulli) distribution: P(K = k) = C(n,k) p^k (1-p)^(n-k). The parameters are n, the number of experiments, and p, the probability of success of each experiment (assumed constant across experiments); k is the target number of successes. For n large the binomial approximates a Gaussian distribution.
o The lognormal is a right-skewed distribution.
o A multivariate distribution is the distribution of the joint behaviour of two or more variables.
Matrix operations:
o The rank is the number of linearly independent rows (or columns).
o A square matrix is invertible iff it is full rank.
o To multiply A by B, A must have as many columns as B has rows; the resulting matrix has A's number of rows and B's number of columns.
o To sum the rows of a matrix, pre-multiply it by a row vector of ones.
o With two vectors we can build a matrix through the outer product ab'.
Inequalities:
o Chebyshev inequality: P(|X - mean| >= k*sigma) <= 1/k^2 (the modulus is considered, so both tails are accounted for).
o Vysochanskij-Petunin inequality: P(|X - mean| >= k*sigma) <= 4/(9 k^2), for unimodal distributions and k large enough.
o Cantelli one-sided inequality: P(X - mean >= k*sigma) <= 1/(1 + k^2).
Distribution measures:
o The skewness measures the asymmetry of a distribution; it is the third standardized central moment. A positive value indicates right asymmetry, a negative value left asymmetry.
o The kurtosis, the fourth standardized central moment, measures how much of the distribution sits in the shoulders versus the tails: the higher the value, the higher the concentration in the tails. It is affected by asymmetry as well, and it is always > 0.
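A minimal MATLAB sketch computing the sample skewness and kurtosis by hand and checking the Chebyshev and Cantelli bounds empirically; the lognormal sample and the choice k = 3 are illustrative assumptions of mine.

% Sample moments and a check of the tail inequalities.
x = exp(randn(10000,1));                    % a skewed, fat-tailed sample (lognormal)
m = mean(x); s = std(x);
z = (x - m)/s;
skew = mean(z.^3);                          % third standardized central moment
kurt = mean(z.^4);                          % fourth standardized central moment (always > 0)

k = 3;                                      % how many standard deviations away
p_two_sided = mean(abs(x - m) >= k*s);      % empirical two-sided tail probability
p_one_sided = mean(x - m >= k*s);           % empirical one-sided tail probability
fprintf('skew=%.2f  kurt=%.2f\n', skew, kurt);
fprintf('two-sided %.4f <= Chebyshev %.4f\n', p_two_sided, 1/k^2);
fprintf('one-sided %.4f <= Cantelli  %.4f\n', p_one_sided, 1/(1+k^2));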
Matlab question
data = xlsread('filename', 'worksheet', 'range'); if the worksheet argument is -1, an Excel window opens to interactively select the data. The worksheet can be given either as a string (name) or as a number (index).
xlswrite('filename', data, 'worksheet', 'range') writes the data to the file.
inv(A) computes the inverse of the matrix.
[coeff, latent] = pcacov(A) performs principal components on a covariance matrix: coeff contains the eigenvectors, latent the eigenvalues (the diagonal of lambda).
flipud(A) flips the matrix upside down, so the last row becomes the first.
cov(A) computes the Var-Cov matrix of the columns of A.
for i = 0:4:12 (...) end, where 0 is the starting value, 4 is the step and 12 is the final value.