The document summarizes key concepts related to the Arbitrage Pricing Theory (APT). It introduces matrix notation for factor models and decomposes return variance into factor and residual components. Three cases of factor model restrictions are described: the exact factor model, the strict factor model, and the approximate factor model. The document then discusses no-arbitrage pricing conditions and derives the APT pricing restriction under an exact factor model, showing that expected returns can be written as a linear function of the factor loadings.
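The pricing restriction derived there can be stated compactly. A sketch in LaTeX, with notation chosen here for illustration (b_{ik} is asset i's loading on factor k, \lambda_k that factor's risk premium):

```latex
% Exact factor model for asset i with K factors:
%   r_i = a_i + \sum_{k=1}^{K} b_{ik} f_k
% (no residual term in the exact case).
% No-arbitrage then forces expected returns onto a linear
% function of the loadings:
E[r_i] = \lambda_0 + \sum_{k=1}^{K} b_{ik}\,\lambda_k
% where \lambda_0 is the zero-beta (risk-free) rate and
% \lambda_k is the risk premium for exposure to factor k.
```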
There are two types of C constants: primary and secondary. Primary constants include integer, real, character, string, and escape sequence constants. Integer constants can be decimal, octal, or hexadecimal. Real constants represent fractional values in decimal or exponential notation. Character constants are single characters within single quotes. String constants are sequences of characters within double quotes. Escape sequences use backslashes to represent special characters like newlines. Secondary constants include arrays, pointers, structures, unions, and enums.
The problem describes a "hare and hounds" road rally game where a hare selects a secret route and hounds try to follow it. The hare leaves clues at choice point intersections where the main route rule is broken. Hounds must try each path from choice points until finding a confirmation marker. The task is to simulate a hound following the hare's route, given the road map and clue locations. Input provides the map and clue details, and output should report the hare/hound path lengths and hare's route roads.
In this paper, we show how the ambiguities in the derivation of the BSE can be eliminated.
We treat the option as a hedging instrument and present a definition of the option price based on market risk weighting. In this approach, we define a random market price for each market scenario. The spot price is then interpreted as one that reflects the balance between the profit-loss expectations of the market participants.
This document discusses the Black-Scholes pricing concept for options. It summarizes two common derivations of the Black-Scholes equation: the original by Black and Scholes, which uses a hedged position consisting of a long stock position and a short option position, and an alternative presented in some textbooks, which uses a hedged position consisting of a long option position and a short stock position. The document also notes that while the Black-Scholes price guarantees a risk-free return at a single moment in time, it does not necessarily reflect market prices, and there is no guarantee the market will use the Black-Scholes price.
The document discusses continuous random variables. It defines a continuous random variable as one whose range is an interval of real numbers. The probability density function f(x) is used to determine probabilities as areas under its curve, and the properties of f(x) are described. The cumulative distribution function F(x) gives the probability that the random variable is less than or equal to x. The mean, variance, and standard deviation of a continuous random variable are defined as integrals involving the random variable and its probability density function.
The document discusses the bisection method, a numerical method for finding roots of a continuous function such as a polynomial. The bisection method works by repeatedly bisecting an interval where the function changes sign, narrowing in on a root. It involves finding an interval [a,b] where f(a) and f(b) have opposite signs, taking the midpoint x1, and discarding half of the interval based on the sign of f(x1). This process is repeated, halving the interval width each time, until a root is isolated within the desired accuracy. An example applies the bisection method to find a root of f(x) = x^3 + 3x - 5 on the interval [1,2].
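The procedure described here can be sketched in a few lines of Python; the tolerance and iteration cap are illustrative choices:

```python
def bisect(f, a, b, tol=1e-8, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        fm = f(mid)
        if fm == 0 or (b - a) / 2 < tol:
            return mid
        # Keep the half-interval where the sign change (and hence a root) lies.
        if fa * fm < 0:
            b, fb = mid, fm
        else:
            a, fa = mid, fm
    return (a + b) / 2

# The example from the text: f(x) = x^3 + 3x - 5 on [1, 2].
root = bisect(lambda x: x**3 + 3*x - 5, 1.0, 2.0)
```

Each pass halves the interval, so isolating the root to within 1e-8 from a unit-width starting interval takes roughly 27 iterations.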
This document discusses concepts related to calculus including limits, continuity, and derivatives of functions. Specifically, it covers:
- Definitions and theorems related to limits, continuity, and derivatives of algebraic functions.
- Evaluating limits, determining continuity of functions, and taking derivatives of algebraic functions using basic theorems of differentiation.
- The objective is for students to be able to evaluate limits, determine continuity, and find derivatives of continuous algebraic functions in explicit or implicit form after discussing these calculus concepts.
Chap05 continuous random variables and probability distributions (Judianto Nugroho)
This chapter discusses continuous random variables and probability distributions, including the normal distribution. It introduces continuous random variables and their probability density functions. It describes the key characteristics and properties of the uniform and normal distributions. It also discusses how to calculate probabilities using the normal distribution, including how to standardize a normal distribution and use normal distribution tables.
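The standardization step mentioned here replaces a table lookup when the error function is available; the distribution parameters in the example are hypothetical:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ Normal(mu, sigma), via standardization z = (x - mu)/sigma."""
    z = (x - mu) / sigma
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical example: X ~ Normal(100, 15); P(X <= 115) = P(Z <= 1).
p = normal_cdf(115, mu=100, sigma=15)
```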
1. The document discusses techniques for finding extrema of functions, including absolute and local extrema. Critical points, endpoints, and the first and second derivative tests are covered.
2. The mean value theorem and Rolle's theorem are summarized. The mean value theorem relates the average and instantaneous rates of change over an interval.
3. Optimization problems can be solved by setting the derivative of the objective function equal to zero to find critical points corresponding to maxima or minima.
4. Newton's method is presented as an iterative process for approximating solutions to equations, using tangent lines to generate a sequence of improving approximations.
5. Anti-derivatives are defined as functions whose derivatives are a given function.
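The Newton's method iteration from item 4 can be sketched as follows; the target function, its derivative, and the starting guess are hypothetical illustration choices:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) from the starting guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Each step follows the tangent line at x down to the x-axis.
        x = x - fx / fprime(x)
    return x

# Hypothetical example: approximate sqrt(2) as the positive root of x^2 - 2.
r = newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0)
```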
This document provides an overview of solving nonlinear equations. It discusses solving single nonlinear equations and systems of nonlinear equations. It covers existence and uniqueness of solutions, multiple roots, sensitivity and conditioning. Iterative methods are used to solve nonlinear equations numerically. The bisection method is introduced, which successively halves the interval containing the solution until the desired accuracy is reached. The convergence rate of the bisection method is linear, meaning the error bound is reduced by half each iteration.
1. Continuous random variables are defined over intervals rather than discrete points. The probability that a continuous random variable takes on a value in an interval from a to b is given by an integral of the probability density function f(x) over that interval.
2. The probability density function f(x) defines the probabilities of intervals of the continuous random variable rather than individual points. It has the properties that it is always nonnegative and its integral over all x is 1.
3. The cumulative distribution function F(x) gives the probability that the random variable takes on a value less than or equal to x. It is defined as the integral of the probability density function from negative infinity to x.
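The properties in items 1 and 2 can be checked numerically for a concrete density; the density f(x) = 2x on [0, 1] and the midpoint integrator are illustration choices, not from the original document:

```python
def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Hypothetical density: f(x) = 2x on [0, 1], zero elsewhere.
f = lambda x: 2 * x

total = integrate(f, 0.0, 1.0)   # integral over the whole range, should be ~1
p = integrate(f, 0.0, 0.5)       # P(0 <= X <= 0.5) = F(0.5) = 0.5**2 = 0.25
```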
This document discusses several methods for evaluating investment performance and risk-adjusted returns, including:
1. The Treynor and Sharpe measures, which calculate risk-adjusted excess returns using beta and standard deviation, respectively.
2. Jensen's alpha, which measures excess returns over the market risk premium.
3. The Modigliani & Modigliani (M2) measure, which adjusts portfolios to have the same standard deviation as the market.
4. Fama's decomposition, which breaks down excess returns into components attributed to risk, selectivity, diversification, and net selectivity.
5. Additive attribution, which attributes excess returns to allocation effects, selection effects, and interaction effects relative to a benchmark.
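The first three measures in the list are one-line formulas; a sketch with hypothetical portfolio numbers (12% return, 3% risk-free rate, 18% volatility, beta 1.2, 10% market return):

```python
def sharpe_ratio(rp, rf, sigma_p):
    """Risk-adjusted excess return per unit of total risk (standard deviation)."""
    return (rp - rf) / sigma_p

def treynor_ratio(rp, rf, beta_p):
    """Risk-adjusted excess return per unit of systematic risk (beta)."""
    return (rp - rf) / beta_p

def jensens_alpha(rp, rf, rm, beta_p):
    """Excess return over what CAPM predicts for the portfolio's beta."""
    return rp - (rf + beta_p * (rm - rf))

s = sharpe_ratio(0.12, 0.03, 0.18)        # 0.09 / 0.18 = 0.5
t = treynor_ratio(0.12, 0.03, 1.2)        # 0.09 / 1.2 = 0.075
a = jensens_alpha(0.12, 0.03, 0.10, 1.2)  # 0.12 - (0.03 + 1.2 * 0.07) = 0.006
```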
1. The document defines discrete random variables as random variables that can take on a finite or countable number of values. It provides an example of a discrete random variable being the number of heads from 4 coin tosses.
2. It introduces the probability mass function (pmf) as a function that gives the probability of a discrete random variable taking on a particular value. The pmf must be greater than or equal to 0 and sum to 1.
3. The cumulative distribution function (CDF) of a discrete random variable is defined as the sum of the probabilities of the random variable being less than or equal to a particular value. The CDF ranges from 0 to 1 and increases monotonically.
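The four-coin-toss example can be checked directly with the binomial pmf; `binomial_pmf` and `binomial_cdf` are illustrative names:

```python
from math import comb

def binomial_pmf(k, n=4, p=0.5):
    """P(X = k) heads in n independent tosses of a coin with heads probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binomial_cdf(k, n=4, p=0.5):
    """P(X <= k): running sum of the pmf up to k."""
    return sum(binomial_pmf(j, n, p) for j in range(k + 1))

pmf_total = sum(binomial_pmf(k) for k in range(5))  # pmf sums to 1
p_two = binomial_pmf(2)                             # C(4,2) / 16 = 6/16 = 0.375
```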
A new derivation of the Black-Scholes Equation (BSE) based on integral-form stochastic calculus is presented. The construction of the BSE solution is based on infinitesimal perfect hedging. Perfect hedging over a finite time interval is a separate problem that does not change option pricing, and the cost of hedging does not constitute an adjustment to the BS price. We also discuss a more profound alternative approach to option pricing, which defines the option price as a settlement between counterparties and, in contrast to the BS approach, presents the market risk of the option premium.
There are several significant drawbacks in derivative price modeling that relate to global regulation of the derivatives market. Here we present a unified approach which, in a stochastic market, interprets the option price as a random variable. The spot price therefore does not completely characterize the price in a stochastic environment. The complete derivatives price includes the spot price as well as the value of the market risk implied by the use of the spot price. This interpretation is similar to the notion of a random variable in probability theory, in which a random variable is completely defined by its cumulative distribution function.
This document provides an overview of several performance measurement techniques used to evaluate investment portfolios including:
1) The Treynor and Sharpe measures which calculate risk-adjusted returns using beta and standard deviation, respectively.
2) Jensen's alpha which measures excess returns after adjusting for risk.
3) Fama's decomposition which breaks down excess returns into components attributed to risk and selectivity.
4) The Brinson attribution model which attributes excess returns to allocation effects, selection effects, and interaction effects based on differences from a benchmark.
The document discusses option pricing models including the binomial model and Black-Scholes model. The binomial model assumes the price of an asset can rise or fall in discrete time periods and that there is no arbitrage opportunity. The Black-Scholes model provides a way to price European style options and assumes the underlying asset follows geometric Brownian motion. It provides a closed-form solution using the cumulative distribution function and parameters like stock price, exercise price, risk-free rate, and volatility. Variants of the model have been developed to price options on dividend-paying stocks and American style options.
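The closed-form solution described here can be sketched directly from the listed parameters; the numerical inputs below (spot 100, strike 100, one year, 5% rate, 20% volatility) are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def black_scholes_put(S, K, T, r, sigma):
    """European put price under the same assumptions."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

c = black_scholes_call(100, 100, 1.0, 0.05, 0.20)
p = black_scholes_put(100, 100, 1.0, 0.05, 0.20)
```

A quick sanity check is put-call parity: c - p should equal S - K*exp(-r*T) exactly, independent of volatility.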
This document provides guidance on sketching graphs of functions by considering key features such as symmetries, intercepts, extrema, asymptotes, concavity, and inflection points. It then works through an example of sketching the graph of the function f(x) = (2x^2 - 8)/(x^2 - 16). Key steps include finding vertical and horizontal asymptotes, critical points and inflection points, intervals of increase/decrease, and finally sketching the graph.
In this short note, we present the structure of perfect hedging. Closed-form formulas clarify the fact that the Black-Scholes (BS) portfolio provides a perfect hedge only at the initial moment. Holding the portfolio over a finite period implies an additional cash flow that cannot be embedded in the BS pricing scheme, and this cash flow affects the BS option price.
Probability mass functions and probability density functions (Ankit Katiyar)
This document discusses probability mass functions (pmf) and probability density functions (pdf) for discrete and continuous random variables. A pmf fX(x) gives the probability of a discrete random variable X taking on the value x. A pdf fX(x) defines the probability that a continuous random variable X falls within an interval via its cumulative distribution function FX(x). The pdf must be non-negative and have an area/sum of 1 under the curve/over all x values.
Rational functions are functions of the form f(x)=polynomial/polynomial. There are six key aspects to analyze in rational functions: y-intercept, x-intercepts, vertical asymptotes, horizontal/slant asymptotes, and the graph. Vertical asymptotes occur when the denominator is 0, x-intercepts when the numerator is 0, and horizontal/slant asymptotes depend on the relative degrees of the numerator and denominator polynomials.
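These rules can be verified numerically for the worked example f(x) = (2x^2 - 8)/(x^2 - 16): the denominator vanishes at x = 4 and x = -4 (vertical asymptotes), the numerator vanishes at x = 2 and x = -2 (x-intercepts), and since the degrees match, the horizontal asymptote is the ratio of leading coefficients, y = 2.

```python
def f(x):
    """The worked example: a rational function with equal-degree numerator and denominator."""
    return (2 * x**2 - 8) / (x**2 - 16)

# For large |x|, f approaches the horizontal asymptote y = 2.
far = f(1e6)
```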
The document discusses the "Dragon" pattern in technical analysis. The Dragon pattern is a variation of double bottoms that forms at market bottoms. It consists of a "head" formation followed by two legs that form within 5-10% of each other, with the second leg signaling a reversal. A trendline is drawn connecting the head and first leg. A close above the trendline or "hump" area confirms the pattern. Rules are provided for entering long positions on breakouts and taking profit targets. The inverse Dragon pattern also trades in a similar manner for short positions. Real examples from futures charts are shown and discussed.
This document discusses operations on rational expressions, including:
1) Reducing rational expressions to lowest terms by dividing the numerator and denominator by their greatest common factor.
2) Multiplying rational expressions follows the same rules as multiplying fractions - the numerators are multiplied and the denominators are multiplied.
3) Dividing rational expressions follows the same rules as dividing fractions - the expression is written as the first rational expression over the second.
The document discusses antiderivatives and indefinite integration. It defines antiderivatives as functions whose derivatives are a given function. The general form of the antiderivative includes adding an arbitrary constant C, known as the constant of integration. It provides examples of using basic integration rules to find antiderivatives and solving differential equations. Initial conditions are introduced as a way to determine a particular solution when the general solution involves an arbitrary constant.
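A minimal worked example of the idea (the integrand and initial condition are hypothetical):

```latex
\int (3x^2 + 2)\,dx = x^3 + 2x + C
% The initial condition F(1) = 4 determines the particular solution:
%   F(1) = 1 + 2 + C = 4 \implies C = 1,
% so F(x) = x^3 + 2x + 1.
```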
This document discusses portfolio theory and provides advice for effective investing. It covers intrinsic value analysis, growth versus value investing strategies, fundamental and technical analysis techniques, why trading strategies become outdated, common portfolio mistakes, the capital asset pricing model relationship between risk and return, advantages of ETFs, utilizing optimal asset mixes based on risk tolerance, and importance of patience, discipline and eagerness to learn when investing. Key investing advice is to construct a portfolio with a mix of low and high beta investment vehicles to balance risk and potential returns.
The document discusses the Capital Asset Pricing Model (CAPM) and the Arbitrage Pricing Theory (APT). It states that CAPM models the relationship between risk and expected return using the security market line. It assumes assets fall on this line based on their beta, which measures volatility relative to the market. APT also relates return to factors but considers multiple systematic factors rather than just the market. It suggests returns are influenced by factors like economic growth, interest rates, and inflation rather than just the market.
The document discusses the arbitrage pricing theory (APT), which relates a security's expected return to multiple common risk factors. It provides examples of how the APT can be used to model returns based on factors like inflation, GDP growth, and exchange rates. The APT assumes perfect capital markets, homogeneous investor expectations, and allows short selling and arbitrage opportunities. It implies a linear relationship between expected returns and factor sensitivities similar to the capital asset pricing model. Empirical tests provide some support for the APT but also have limitations.
The document discusses several assumptions of portfolio theory models, including CAPM and APT. These models assume investors have homogeneous expectations, are risk-averse utility maximizers, and operate in an environment of perfect competition with no transaction costs. The key aspects of CAPM discussed are the efficient frontier and the relationship between risk and return. APT relaxes some CAPM assumptions and focuses on factor sensitivities driving returns rather than beta alone. Arbitrage pricing looks for riskless profit opportunities by making small adjustments to portfolio weights.
This document discusses portfolio analysis and selection based on modern portfolio theory. It defines key terms like portfolio, phases of portfolio selection, the Markowitz model, efficient frontier, diversification and the optimum portfolio. The Markowitz model uses a mean-variance framework to identify efficient portfolios that maximize return for a given level of risk. An optimum portfolio provides the highest expected return for its risk level by balancing risk across different asset classes through diversification.
This document discusses portfolio theory and the Capital Asset Pricing Model (CAPM). It explains that diversification reduces risk through averaging effects. While specific risk can be diversified away, market risk cannot. Portfolio theory assumes investors are risk averse and seek the highest returns for a given level of risk. The CAPM uses beta to measure a stock's risk relative to the market and defines the security market line, where expected return is equal to the risk-free rate plus a risk premium based on beta. Stocks with beta greater than one are more volatile than the market.
Case study on financing capital for a new startupAmitava Sengupta
The document discusses various funding options for a large industrial gases company that needs Rs. 500 cr to install a new oxygen plant. The finance manager evaluates equity capital, retained earnings, debentures, bank loans, trade receivables and a new contract's assured cash flows. Combining rights share issuance, mid-term loans, retained earnings, factoring, securitization and equity with differential voting rights raises Rs. 513.5 cr, fulfilling the target amount. Internal sources and bank financing involve less risk than equity dilution or high-cost debt.
The document provides an overview of the Capital Asset Pricing Model (CAPM). It defines key terms like the capital allocation line, capital market line, security market line, beta, and expected return. The capital allocation line shows the risk-return tradeoff for efficient portfolios. The capital market line depicts the risk-return relationship for efficient portfolios available to investors. The security market line is a graphic representation of CAPM that describes the market price of risk. CAPM holds that the expected return of an asset is determined by its beta, or non-diversifiable risk. It assumes investors will hold an efficient portfolio consisting of a risk-free asset and the market portfolio.
Capital investment decision (cid) case solutionRifat Ahsan
This document contains a summary of two capital investment decision case analyses. The first case is about Cape Chemical co. and involves calculating financial metrics like WACC, NPV, IRR, and MIRR to evaluate whether to proceed with used or new equipment. The second case analyzes investing in a brewpub by estimating annual cash flows in best and worst scenarios and calculating NPV and IRR under each scenario to determine the project's viability. Excel is used to solve problems in both cases. The document also includes sections on capital budgeting theory, methods, and importance to provide context.
The document discusses the Sharp Index model for calculating the expected return of a portfolio. It provides the formula for calculating the Sharpe ratio which takes into account the variance of the market index, variance of individual stock movements not associated with the market, and the beta of each stock. It also mentions there are examples provided to demonstrate solving the Sharpe ratio formula.
Major South African bank expands investment in Microsoft Dynamics CRM 2013 Mint Group
Major South African bank expands investment in Microsoft Dynamics CRM 2013 for Case Management Solution in their Trust Services Department through Mint
1) Portfolio theory shows that risk and return are negatively correlated, and diversification across many assets reduces unsystematic risk. Only systematic risk cannot be eliminated through diversification.
2) The efficient frontier graphs the set of optimal portfolios that maximize return for a given level of risk. The capital market line depicts the combination of investments in the market portfolio and risk-free asset.
3) The Capital Asset Pricing Model derives the security market line relationship between risk and return, defining risk as an asset's beta coefficient measuring its volatility relative to the market.
The document discusses risk and return in investments. It defines key concepts such as realized and expected return, ex-ante and ex-post returns, sources and measurements of risk including standard deviation and coefficient of variation. It also discusses the risk-return tradeoff and how higher risk investments require higher potential returns to compensate for additional risk.
The document summarizes the Markowitz model for building optimal investment portfolios. It discusses key aspects of the model such as diversification to reduce risk, defining the efficient frontier of portfolios with maximum return for a given level of risk, and including both risky and risk-free assets as well as leverage to construct portfolios. The model provides a framework for investors to analyze risk and return tradeoffs across different portfolio combinations.
The document discusses risk and return in investing. It explains that equity investments like stocks historically have higher average returns of over 10% compared to debt investments like bonds that return 3-4%, but stocks are also more volatile. It defines risk as the variability of returns, and introduces the concepts of systematic risk that affects all stocks equally and unsystematic risk that is specific to individual stocks. Diversification can reduce unsystematic risk but not systematic risk. It also discusses measuring market risk through a stock's beta value, which represents its volatility relative to the overall market.
1. The document discusses portfolio selection using the Markowitz model.
2. The Markowitz model aims to find the optimal portfolio, which provides the highest return and lowest risk. It does this by analyzing different combinations of securities to identify efficient portfolios.
3. The document provides details on the tools and steps used in the Markowitz model for portfolio selection, including analyzing expected returns, variance, standard deviation, and coefficients of correlation between securities.
1) Total risk of a security is composed of systematic risk, which stems from external market factors, and unsystematic risk, which is specific to a company.
2) Diversifying a portfolio by holding many securities with returns that are not perfectly positively correlated can reduce total risk through lowering unsystematic risk exposure.
3) The degree of risk reduction from diversification depends on the correlation between the returns of the securities in the portfolio. Perfectly negatively correlated securities eliminate risk, while perfectly positively correlated securities do not allow for risk reduction through diversification.
The document summarizes the evolution of modern portfolio theory from its origins in Harry Markowitz's mean-variance model to subsequent developments like the Sharpe single-index model and CAPM. It discusses how Markowitz showed investors could maximize returns for a given risk level by holding efficient portfolios on the efficient frontier. The Sharpe model reduced the inputs needed for portfolio risk estimation by correlating assets to a market index rather than each other. CAPM then defined the market portfolio as the efficient portfolio and allowed a risk-free asset, changing the shape of the efficient frontier.
This document discusses estimating covariance matrices for portfolio selection. It introduces a shrinkage estimator that is an optimally weighted average of the sample covariance matrix and single-index covariance matrix. The empirical part compares these estimators to determine which produces the most efficient portfolio with smallest return variability. The sample covariance matrix has problems when the number of assets is large, as it has high variance and its inverse is a poor estimator. Shrinkage aims to improve upon the sample covariance matrix by combining it with a factor model-based estimator.
COVARIANCE ESTIMATION AND RELATED PROBLEMS IN PORTFOLIO OPTIMICruzIbarra161
COVARIANCE ESTIMATION AND RELATED PROBLEMS IN PORTFOLIO OPTIMIZATION
Ilya Pollak
Purdue University
School of Electrical and Computer Engineering
West Lafayette, IN 47907
USA
ABSTRACT
This overview paper reviews covariance estimation problems and re-
lated issues arising in the context of portfolio optimization. Given
several assets, a portfolio optimizer seeks to allocate a fixed amount
of capital among these assets so as to optimize some cost function.
For example, the classical Markowitz portfolio optimization frame-
work defines portfolio risk as the variance of the portfolio return,
and seeks an allocation which minimizes the risk subject to a target
expected return. If the mean return vector and the return covariance
matrix for the underlying assets are known, the Markowitz problem
has a closed-form solution.
In practice, however, the expected returns and the covariance
matrix of the returns are unknown and are therefore estimated from
historical data. This introduces several problems which render the
Markowitz theory impracticable in real portfolio management appli-
cations. This paper discusses these problems and reviews some of
the existing literature on methods for addressing them.
Index Terms— Covariance, estimation, portfolio, market, fi-
nance, Markowitz
1. INTRODUCTION
The return of a security between trading day t1 and trading day t2
is defined as the change in the closing price over this time period,
divided by the closing price on day t1. For example, the daily (i.e.,
one-day) return on trading day t is defined as (p(t)−p(t−1))/p(t−
1) where p(t) is the closing price on day t and p(t−1) is the closing
price on the previous trading day. Note that if t is a Monday or the
day after a holiday, the previous trading day will not be the same as
the previous calendar day.
Suppose an investment is made into N assets whose return vec-
tor is R, modeled as a random vector with expected return µ =
E[R] and covariance matrix Λ = E[(R − µ)(R − µ)T ]. In other
words, R = (R(1), . . . , R(N))T where R(n) is the return of the n-th
asset. It is assumed throughout the paper that the covariance matrix
Λ is invertible. This assumption is realistic, since it is quite unusual
in practice to have a set of assets whose linear combination has re-
turns exactly equal to zero. Even if an investment universe contained
such a set, the number of assets in the universe could be reduced to
eliminate the linear dependence and make the covariance matrix in-
vertible.
Out of these N assets, a portfolio is formed with allocation
weights w = (w(1), . . . , w(N))T . The n-th weight is defined as the
amount invested into the n-th asset, as a fraction of the overall invest-
ment into the portfolio: if the overall investment into the portfolio is
$D, and $D(n) is invested into the n-th asset, then w(n) = D(n)/D.
Therefore, by definition, the weights sum to one:
w
T
1 = 1, (1)
where 1 is an N -vector of ones. Note that some of the weights may
be negative, ...
1) The document introduces the stabilizer formalism for describing quantum error correction. The stabilizer formalism uses concepts from algebra to compactly describe quantum error detection and correction.
2) It provides background on quantum computation, including the mathematical formalism using tensor products, quantum states and state spaces, quantum gates, and measurement.
3) Any error on a quantum system can be described as a Pauli operation (X, Y, or Z), and the stabilizer formalism allows describing a quantum error correcting code in terms of the Pauli operators it detects and corrects.
Effective properties of composite materialsKartik_95
1. The document discusses effective material properties of fiber reinforced composites using micromechanics. It relates volume averaged stresses and strains in a representative volume element (RVE) to determine effective composite properties.
2. It presents equations for volume fractions and density of constituents using a "rule of mixtures" approach. Maximum theoretical fiber volume fractions are calculated for ideal square and triangular fiber arrays.
3. Elementary mechanics of materials models are used to predict longitudinal modulus, transverse modulus, Poisson's ratio and shear modulus of a fiber reinforced lamina. The models assume perfect bonding and uniform stresses/strains between fibers and matrix.
The document discusses the curse of dimensionality and principal component analysis (PCA). It explains that as more features are added to multivariate studies, the sample becomes less representative of the actual data due to the exponential increase in sampling needed with higher dimensions. PCA is introduced as a technique to reduce the dimensionality of data while preserving as much information as possible. It works by transforming the data to a new coordinate system where the greatest variance by each component is achieved. Examples are provided to illustrate PCA and how it identifies the components that capture the most information.
This document outlines how to conduct F-tests in regression analysis. It discusses testing linear restrictions by comparing restricted and unrestricted regression models using an F-statistic. Specific applications covered include testing for country effects, testing if coefficients are equal, and testing if coefficients sum to a given value through algebraic manipulation of the regression equations. Steps for performing F-tests are provided, including calculating restricted and unrestricted sum of squares, degrees of freedom, and determining significance.
This chapter introduces a simple representative agent macroeconomic model with a single consumer and firm. The consumer maximizes utility from consumption and leisure subject to a budget constraint. The firm maximizes profits by choosing optimal capital and labor inputs. A competitive equilibrium exists where markets clear, consumers optimize, and firms optimize given prices. The competitive equilibrium is also a Pareto optimum, so the model implies no role for government intervention. An example is provided using Cobb-Douglas production and constant relative risk aversion utility functions to illustrate the model.
This document discusses a study on expected shortfall as a risk measure in a multi-currency environment. It begins with an introduction and literature review on expected shortfall and value-at-risk as risk measures. It discusses the advantages of expected shortfall over value-at-risk, including that it is coherent and accounts for the magnitude of losses. The document then reviews literature on risk measures in multiple currencies and discusses how expected shortfall can vary depending on the currency used for aggregation. It proposes to model expected shortfall in multiple currencies and examine the implications of currency choice and composition of risk-free assets on risk measurement.
This document summarizes a machine learning homework assignment with 4 problems:
1) Probability questions about constructing random variables.
2) Questions about Poisson generalized linear models (GLM) including log-likelihood, prediction, and regularization.
3) Questions comparing square loss and logistic loss for an outlier point.
4) Questions about batch normalization including its effect on model expressivity and gradients.
In this paper, the black-litterman model is introduced to quantify investor’s views, then we expanded
the safety-first portfolio model under the case that the distribution of risk assets return is ambiguous. When
short-selling of risk-free assets is allowed, the model is transformed into a second-order cone optimization
problem with investor views. The ambiguity set parameters are calibrated through programming
These notes include some results from frame theory. These results are useful for sparse representations and compressive sensing theory. Errors and omissions are all mine.
- The Arrow-Pratt approximation for risk premium is only accurate for small risks, and deteriorates rapidly as risk size increases
- The paper proposes a new approximation based on "risk-neutral probabilities" that works better for both large and small risks
- For a two-dimensional example with logarithmic utility, the new approximation based on risk-neutral probabilities tracks the true risk premium value more closely than the Arrow-Pratt approximation, especially for larger risks
The document summarizes the capital asset pricing model (CAPM) and reviews early empirical tests of the model. It begins by outlining the logic and key assumptions of the CAPM, including that the market portfolio must be mean-variance efficient. However, empirical tests found problems with the CAPM's predictions about the relationship between expected returns and market betas. Specifically, cross-sectional regressions did not find intercepts equal to the risk-free rate or slopes equal to the expected market premium. To address measurement error, later tests examined portfolios rather than individual assets. In general, the early empirical evidence revealed shortcomings in the CAPM's ability to explain returns.
The document discusses probability-based approaches for calculating expected returns and variance under uncertainty. It provides an example using return data for a stock to calculate the expected return of 9.25% and variance of 0.02%. It also discusses how portfolio return and variance depends on asset weights, the individual asset expected returns and variances, and the correlation between the assets. Assuming the two example assets are perfectly negatively correlated, it calculates the asset weights needed for a zero risk portfolio and the expected return of that portfolio as 25.36%. Finally, it discusses limits to diversification in practice, such as the inability to hold all securities and that only unsystematic risk can be reduced through diversification.
"Correlated Volatility Shocks" by Dr. Xiao Qiao, Researcher at SummerHaven In...Quantopian
Commonality in idiosyncratic volatility cannot be completely explained by time-varying volatility. After removing the effects of time-varying volatility, idiosyncratic volatility innovations are still positively correlated. This result suggests correlated volatility shocks contribute to the comovement in idiosyncratic volatility.
Motivated by this fact, we propose the Dynamic Factor Correlation (DFC) model, which fits the data well and captures the cross-sectional correlations in idiosyncratic volatility innovations. We decompose the common factor in idiosyncratic volatility (CIV) of Herskovic et al. (2016) into the volatility innovation factor (VIN) and time-varying volatility factor (TVV). Whereas VIN is associated with strong variation in average returns, TVV is only weakly priced in the cross section
A strategy that takes a long position in the portfolio with the lowest VIN and TVV betas, and a short position in the portfolio with the highest VIN and TVV betas earns average returns of 8.0% per year.
This document discusses portfolio risk and return, including expected return, measures of risk like variance and standard deviation, and how diversification can reduce risk. It provides examples of calculating expected return, variance, and standard deviation for individual stocks and portfolios. It then introduces the Capital Asset Pricing Model (CAPM), which specifies the relationship between risk and required return of individual stocks based on the stock's beta. It provides examples of using the CAPM equation to calculate required return given beta and market factors, and calculating beta given expected return and market factors.
This document summarizes a research paper that proposes a new method for estimating the probability of informed trading (PIN) to address biases caused by floating point exceptions (FPEs). When estimating PIN using maximum likelihood estimation, large trade volumes can trigger FPEs that narrow the feasible parameter space and underestimate PIN. The authors develop an alternative likelihood formulation and conduct simulations showing their method produces less biased PIN estimates. Empirical tests on US stock market data from 2007 also indicate the new method better explains the relation between PIN and bid-ask spreads.
This document provides an introduction to complex numbers. It discusses the mathematical and geometrical requirements for representing complex numbers on a plane with real and imaginary axes. Some key points covered include: complex numbers can be used to solve quadratic equations with negative solutions; a complex number has both a real and imaginary part and can be represented as a point in the complex plane; and the angle of a complex number depends on its position in the complex plane relative to the real and imaginary axes. Several examples of representing and calculating angles of complex numbers are worked through.
A simple PCA model was used to find the direction of most variability for the CEF puzzle.
Evidence that the MOM factor as detailed by Carhart (1997) explains this puzzle was
found. Data sets used are available for independent verification of results.
The document discusses the curse of dimensionality when adding features to multivariate studies. It explains that the sampling density needed grows exponentially with the number of dimensions, so that most data points lie outside random samples as more variables are added. It also covers positive definite matrices, principal component analysis (PCA) for reducing dimensions while minimizing information loss, and how PCA works by finding new variables that maximize variance.
This document summarizes an academic paper about a "Guess the Number" game and its implications for financial markets. In the game, players guess a number between 0 and 100, and the winner is the one who guesses closest to 2/3 of the average number chosen. The equilibrium solution occurs when all players guess 0, but in reality outcomes depend on how deeply players think about what others will choose. An analysis of a newspaper competition showed the winning number was influenced by players trying to "move the market." The game demonstrates that rational behavior depends on anticipating what others think, and it can be rational to expect irrational outcomes like financial bubbles. Lord Keynes also discussed beauty contests and the idea that prices reflect anticipated averages of beliefs.
The document summarizes the Solnik-Sercu model for international diversification without and with stochastic inflation. It shows that without inflation, the optimal portfolio is the "log-portfolio" given by the inverse of the covariance matrix of asset returns. With stochastic inflation, the optimal portfolio contains an additional "inflation hedge" term that depends on the covariance between asset returns and inflation. The document outlines the derivation of the Adler-Dumas model, which correctly accounts for stochastic inflation in continuous time using Ito's lemma.
This document provides an introduction and overview of MATLAB. It discusses the main features and capabilities of MATLAB, including its strong linear algebra skills, interactive command line, scripting and programming abilities, graphical user interface, high quality graphs, and ability to run on various operating systems. It then provides instructions for getting started with MATLAB, including unpacking files, starting MATLAB, navigating directories, and running the first demo program. It also discusses key aspects of using MATLAB like the desktop environment, command window, help features, file types, scripts and functions, and setting the path.
The document discusses linear regression and maximum likelihood estimation in EViews. It provides an overview of the linear regression model and ordinary least squares (OLS) estimation. It then discusses how to perform OLS regression and test linear restrictions in EViews. Finally, it introduces maximum likelihood estimation and how it provides an alternative framework for estimation when the data is assumed to follow a particular probability distribution.
The document discusses fixed income products and derivatives. It begins with an agenda covering key concepts like no-arbitrage pricing. It then examines various fixed income products, including zero coupon bonds, coupon bonds, forward rates, floating rate bonds, and interest rate swaps. It also discusses yield-to-maturity as the internal rate of return for a bond, assuming reinvestment at the same rate. Throughout, it emphasizes the principle of no-arbitrage pricing and how derivatives can be replicated and priced using a portfolio of basic assets.
The document discusses the derivation and testing of the Capital Asset Pricing Model (CAPM). It begins by restating three key equations related to the CAPM. It then describes the assumptions and derivation of the CAPM, noting that the key insight is that the market portfolio is efficient. The document outlines how the CAPM makes testable predictions about asset expected returns and betas. It discusses additional assumptions required to test the CAPM using regression analysis. Specifically, it explains the Fama-MacBeth and Gibbons-Ross-Shanken (GRS) approaches to estimating the security market line implied by the CAPM using cross-sectional and time-series regressions respectively.
1 Factor Models and Variance Decomposition
1.1 Notation
$$
\tilde R = \begin{pmatrix} \tilde R_1 \\ \tilde R_2 \\ \vdots \\ \tilde R_N \end{pmatrix}, \qquad
\mu \equiv E\tilde R = \begin{pmatrix} E\tilde R_1 \\ E\tilde R_2 \\ \vdots \\ E\tilde R_N \end{pmatrix}, \qquad
\tilde F = \begin{pmatrix} \tilde F_1 \\ \tilde F_2 \\ \vdots \\ \tilde F_F \end{pmatrix}, \qquad
\tilde\varepsilon = \begin{pmatrix} \tilde\varepsilon_1 \\ \tilde\varepsilon_2 \\ \vdots \\ \tilde\varepsilon_N \end{pmatrix}
$$
Define the operator
$$\Sigma(\tilde X) \equiv E\big[(\tilde X - E\tilde X)(\tilde X - E\tilde X)'\big] = E[\tilde X \tilde X'] - E[\tilde X]\,E[\tilde X]'$$
yielding the Variance-Covariance matrix of the random vector ˜X.
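The equivalence of the two forms of the operator can be checked numerically. The following is a minimal NumPy sketch (the sample values are arbitrary, chosen only for illustration), treating the empirical distribution of a small sample as the population:

```python
import numpy as np

# Small fixed sample of a 2-dimensional random vector X (columns = draws).
# We check the two equivalent forms of the operator
#   Sigma(X) = E[(X - EX)(X - EX)'] = E[X X'] - E[X] E[X]'
# on the empirical distribution (expectations = sample averages).
X = np.array([[1.0, 2.0, 4.0, 1.0],
              [0.5, 1.5, 2.5, 3.5]])          # shape (2, T), T = 4 draws

T = X.shape[1]
mu = X.mean(axis=1, keepdims=True)            # E[X] as a column vector

centered = (X - mu) @ (X - mu).T / T          # E[(X - EX)(X - EX)']
uncentered = X @ X.T / T - mu @ mu.T          # E[X X'] - E[X] E[X]'

assert np.allclose(centered, uncentered)
```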
1.2 Factor Model
The multifactor model can then be written as
$$\tilde R = \alpha' + B'\tilde F + \tilde\varepsilon$$

where $\alpha$ is a row vector with typical element $\alpha_i$ from the $i$-th asset's factor model

$$\tilde R_i = \alpha_i + \sum_{f=1}^{F} \beta_{f,i}\,\tilde F_f + \tilde\varepsilon_i$$

and $B$ is an $F \times N$ matrix of the factors' regression coefficients with typical element $\beta_{f,i}$. If we stack the coefficients of each factor in a row vector

$$\beta_f = \big(\beta_{f,1} \;\; \beta_{f,2} \;\; \cdots \;\; \beta_{f,N}\big)$$
we can write

$$B = \begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_F \end{pmatrix}$$
Please note that – with an eye on the APT derivation – we have consciously chosen to group the coefficients $\beta_{f,i}$ by factor, not by security. Hence our notation of a vector $\beta_f$ does not correspond to the vector of regression coefficients of a multivariate regression like $\tilde y = X\beta + \tilde\varepsilon$. Such vectors would correspond to the columns, not the rows, of $B$.
The following remark may be skipped at first reading. Alternatively, include the constant 1 in the vector of factors, calling the augmented factor vector

$$\tilde F_1 = \begin{pmatrix} 1 \\ \tilde F \end{pmatrix}$$

and correspondingly

$$B_1 = \begin{pmatrix} \alpha \\ B \end{pmatrix}$$

for the matrix of coefficients, so that the factor model can be rewritten as

$$\tilde R = B_1'\,\tilde F_1 + \tilde\varepsilon$$
1.3 Variance Decomposition
The factor model regression imposes the moment conditions

$$E(\tilde\varepsilon) = 0$$

and

$$E(\tilde F\,\tilde\varepsilon') = 0$$

Or, written as scalars: $E(\tilde\varepsilon_i) = 0\ \forall\, i$ and $E(\tilde\varepsilon_i \tilde F_f) = 0\ \forall\, i, f$. From the moment conditions it follows that the variance-covariance matrix of returns can be linearly decomposed into two parts:

$$\Sigma(\tilde R) = B'\,\Sigma(\tilde F)\,B + \Sigma(\tilde\varepsilon)$$

The first term in the sum represents the (co-)variation explained by the factors; the second term stands for the unexplained (co-)variation of security returns. Please note that the same result is obtained using the alternative notation with $B_1$ and $\tilde F_1$, as the first row and the first column of $\Sigma(\tilde F_1)$ consist only of zeros.
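The decomposition can be illustrated by simulation. The sketch below assumes a strict factor model with made-up loadings and covariances (all parameter values are invented for illustration) and compares the sample covariance of simulated returns with the theoretical decomposition:

```python
import numpy as np

# Monte Carlo check of Sigma(R) = B' Sigma(F) B + Sigma(eps) for a strict
# factor model (diagonal residual covariance). The 2 factors, 4 assets, and
# all parameter values below are made up for illustration.
rng = np.random.default_rng(0)

F_, N = 2, 4
B = np.array([[1.0, 0.8, 1.2, 0.5],        # F x N matrix of factor loadings
              [0.3, -0.4, 0.1, 0.9]])
Sigma_F = np.array([[0.04, 0.01],
                    [0.01, 0.02]])         # factor covariance matrix
sigma_eps = np.array([0.1, 0.15, 0.05, 0.2])   # residual standard deviations

T = 200_000
F_draws = rng.multivariate_normal(np.zeros(F_), Sigma_F, size=T)   # T x F
eps = rng.normal(0.0, sigma_eps, size=(T, N))                      # T x N
R = F_draws @ B + eps      # T x N; alpha set to 0 (irrelevant for Sigma)

Sigma_R_sample = np.cov(R, rowvar=False)
Sigma_R_theory = B.T @ Sigma_F @ B + np.diag(sigma_eps ** 2)

assert np.allclose(Sigma_R_sample, Sigma_R_theory, atol=5e-3)
```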
1.4 Three Cases of Factor Model Restrictions
Until now, we have not imposed any assumptions on the covariance structure of returns and factors. The variance decomposition described above holds by construction. Factor models go further than that: they impose an additional assumption on the covariance structure of residual returns $\Sigma(\tilde\varepsilon)$. In general, there are three cases of factor models:
1.4.1 Exact Factor Model
The variance-covariance matrix of security returns is fully explained by the factor model; each asset's regression on the factors has an $R^2$ of 1. This assumption is highly unrealistic, but also highly instructive for the derivation of the APT. In fact, it is the only case in which we can derive an exact pricing relation based on the no-arbitrage condition:

$$\Sigma(\tilde\varepsilon) = 0$$
1.4.2 Strict Factor Model
The correlations between different securities are fully explained by the factor model. The residual covariance matrix is a diagonal matrix:

$$\Sigma(\tilde\varepsilon) = \begin{pmatrix}
\sigma^2_{\varepsilon_1} & 0 & \cdots & 0 \\
0 & \sigma^2_{\varepsilon_2} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \sigma^2_{\varepsilon_N}
\end{pmatrix}$$

This is what Elton and Gruber (1995, Chapters 7 & 8) call an "Index Model".
Market Model is no Strict Factor Model. Aggregating the Market Model

$$\tilde R_i = \alpha_i + \beta_{M,i}\,\tilde R_M + \tilde\varepsilon_i$$

with the weights $w^M_i$ of the market portfolio yields

$$\sum_{i=1}^{N} w^M_i \tilde R_i = \sum_{i=1}^{N} w^M_i \alpha_i + \sum_{i=1}^{N} w^M_i \beta_{M,i}\,\tilde R_M + \sum_{i=1}^{N} w^M_i \tilde\varepsilon_i$$

Substituting $\alpha_i = E\tilde R_i - \beta_{M,i}\,E\tilde R_M$:

$$= \sum_{i=1}^{N} w^M_i E\tilde R_i - \sum_{i=1}^{N} w^M_i \beta_{M,i}\,E\tilde R_M + \sum_{i=1}^{N} w^M_i \beta_{M,i}\,\tilde R_M + \sum_{i=1}^{N} w^M_i \tilde\varepsilon_i$$

$$= E\tilde R_M - 1 \cdot E\tilde R_M + 1 \cdot \tilde R_M + \sum_{i=1}^{N} w^M_i \tilde\varepsilon_i$$

(using $\sum_i w^M_i E\tilde R_i = E\tilde R_M$ and $\sum_i w^M_i \beta_{M,i} = 1$). Since the left-hand side equals $\tilde R_M$ by definition of the market portfolio, this is equivalent to

$$\sum_{i=1}^{N} w^M_i \tilde\varepsilon_i = 0$$

which shows that the Market Model cannot be a strict factor model: the residuals are linearly dependent (meaning that they are correlated). See also Fama (1973).
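The identity $\sum_i w^M_i \tilde\varepsilon_i = 0$ also holds exactly, date by date, for sample OLS residuals whenever the market return is the weighted average of the asset returns. A small simulated check (weights and return parameters are made up for the demo):

```python
import numpy as np

# Illustration that market-weighted OLS residuals of the Market Model sum to
# zero in every period, so the residuals are linearly dependent and
# Sigma(eps) cannot be diagonal with nonzero entries.
rng = np.random.default_rng(1)

N, T = 5, 1_000
w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])    # market weights, sum to one
R = rng.normal(0.01, 0.05, size=(T, N))      # simulated asset returns
RM = R @ w                                   # market return = weighted average

# OLS of each asset on the market: beta_i = Cov(R_i, RM) / Var(RM)
RM_c = RM - RM.mean()
beta = (R - R.mean(axis=0)).T @ RM_c / (RM_c @ RM_c)
alpha = R.mean(axis=0) - beta * RM.mean()
eps = R - alpha - np.outer(RM, beta)         # T x N residual matrix

# The weighted residual sum is (numerically) zero in every period t.
assert np.allclose(eps @ w, 0.0, atol=1e-10)
```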
1.4.3 Approximate Factor Model
When the matrix $\Sigma(\tilde\varepsilon)$ is "sparse", meaning that its off-diagonal elements are "not important", there is a chance that we obtain what is called an approximate factor model. For instance, if there is some residual correlation between stocks belonging to the same industry but no such correlation between stocks of different industries, the matrix of residual returns' covariances will have many zeros, meaning that it is sparse. Under certain conditions, an approximate factor model can do the same job for arbitrage pricing as a strict factor model, but we will not go into the details here (and have remained deliberately vague) . . .
1.5 Portfolio Notation
The factor model decomposes returns and their moments – in particular means and (co-)variances – into two parts: one part explained by the factors and a second part, the residuals. This carries over to portfolios of our securities. For the random return on a portfolio with weights w_p write
R̃_p = w_p′ R̃ = w_p′ α + w_p′ B F̃ + w_p′ ε̃ = w_p′ B_1 F̃_1 + w_p′ ε̃
The expected portfolio return is
E(R̃_p) = w_p′ E(R̃) = w_p′ α + w_p′ B E(F̃) = w_p′ B_1 E(F̃_1)
and the portfolio’s variance (a scalar!) can be decomposed into
w_p′ Σ(R̃) w_p = w_p′ B Σ(F̃) B′ w_p + w_p′ Σ(ε̃) w_p
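As a numerical illustration of this decomposition (all numbers below are made up for the example), one can build Σ(R̃) from factor loadings, a factor covariance matrix and a diagonal residual covariance matrix, and verify that the portfolio variance splits into a factor part and a residual part:

```python
import numpy as np

rng = np.random.default_rng(1)
N, F = 4, 2
B = rng.normal(1.0, 0.5, size=(N, F))              # factor loadings (illustrative)
Sigma_F = np.array([[0.04, 0.01],
                    [0.01, 0.02]])                 # factor covariance matrix
Sigma_eps = np.diag([0.020, 0.030, 0.010, 0.025])  # strict factor model: diagonal

Sigma_R = B @ Sigma_F @ B.T + Sigma_eps            # covariance of security returns

wp = np.array([0.4, 0.3, 0.2, 0.1])                # portfolio weights

total = wp @ Sigma_R @ wp                          # portfolio variance (a scalar!)
factor_part = wp @ B @ Sigma_F @ B.T @ wp
resid_part = wp @ Sigma_eps @ wp
print(np.isclose(total, factor_part + resid_part))  # True
```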
2 No-Arbitrage Pricing
2.1 No-Arbitrage Conditions
No-arbitrage pricing is based on the condition that in the presence of non-satiated investors (meaning that they will always prefer more to less)¹, there can be no arbitrage opportunities.

¹ Please note that this does not imply any assumption on their risk-preferences!

Formally, let us define an arbitrage portfolio² with weights w_0 by the properties of
zero cost:
w_0′ 1 = 0
zero downside risk, which in the presence of unbounded return distributions³ translates to zero risk:

w_0′ Σ(R̃) w_0 = 0
Colloquially, an arbitrage opportunity can be described as “something for nothing”. In our case it would mean that our arbitrage portfolio has a positive expected return, w_0′ E(R̃) > 0. Please note that in the case of w_0′ E(R̃) < 0, an arbitrage opportunity would arise from shorting our initial portfolio, leading to a portfolio with weights −w_0. This yields the no-arbitrage (NA) condition of

zero arbitrage return:

w_0′ E(R̃) = 0
2.2 Pricing Restriction
Using the factor model’s variance decomposition, we can rewrite the zero-risk condition as

w_0′ Σ(R̃) w_0 = w_0′ B Σ(F̃) B′ w_0 + w_0′ Σ(ε̃) w_0 = 0
2.2.1 Exact Factor Pricing
Under an exact factor model Σ(ε̃) = 0, and the zero-risk condition simplifies to

w_0′ B Σ(F̃) B′ w_0 = 0

In order to achieve this condition, we need

w_0′ B = (w_0′ β_1 · · · w_0′ β_F) = (0 · · · 0) = 0′
² Please note that Ingersoll (1987) defines an arbitrage opportunity as being a zero-investment portfolio only – without regard for its riskiness.

³ In particular if they are not bounded to be positive.

We see that the three NA conditions impose orthogonality restrictions between w_0 and each of the vectors 1, β_f (∀ f = 1 … F) and µ. The APT relationship (see below) relies on the fact⁴ that the three NA conditions imply that the vector µ lies in the (hyper-)plane spanned by the other vectors (1, β_f ∀ f = 1 … F), which are also orthogonal to w_0 (see below for an illustration with N = 3 and F = 1). Hence the vector of expected returns can be written as a linear combination of the vector of ones and the factor coefficient vectors β_f:
µ = λ_0 1 + Σ_{f=1}^F λ_f β_f = λ_0 1 + B λ = B_1 λ_1

where the vectors λ and λ_1 are defined by

λ = (λ_1 · · · λ_F)′,   λ_1 = (λ_0 λ′)′ = (λ_0 λ_1 · · · λ_F)′
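These orthogonality restrictions can be verified numerically. The Python sketch below (loadings and risk premia are arbitrary example values) picks an arbitrage-portfolio candidate from the nullspace of (1 B)′ and checks zero cost, zero risk under an exact factor model, and zero expected return when µ satisfies the APT equation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, F = 5, 2
B = rng.normal(1.0, 0.5, size=(N, F))        # factor loadings (illustrative)
Sigma_F = np.diag([0.04, 0.02])              # factor covariance matrix
Sigma_R = B @ Sigma_F @ B.T                  # exact factor model: Sigma(eps) = 0

# w0 must be orthogonal to 1 and to every beta_f, i.e. lie in the
# nullspace of the (F+1) x N matrix A = (1 B)'; here N > F + 1.
A = np.vstack([np.ones(N), B.T])
_, _, Vt = np.linalg.svd(A)
w0 = Vt[-1]                                  # one nullspace basis vector

mu = 0.01 * np.ones(N) + B @ np.array([0.05, 0.03])  # mu = lam0*1 + B*lam

print(abs(w0 @ np.ones(N)))                  # ≈ 0: zero cost
print(abs(w0 @ Sigma_R @ w0))                # ≈ 0: zero risk
print(abs(w0 @ mu))                          # ≈ 0: zero arbitrage return
```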
The orthogonality restrictions and the ensuing APT equation can easily be illustrated for N = 3 and F = 1; see figure 1. In the general case, that is for any N > F, we can prove⁵ that the APT equation holds when we project the vector of expected returns on the vector of ones and each of the β_f ’s (Connor and Korajczyk 1995). That means we run a cross-sectional regression of expected returns on the betas, including an intercept term, and call the regression coefficients λ_i (∀ i = 0 … F):
µ = λ_0 1 + Σ_{f=1}^F λ_f β_f + η
Obviously, the only difference with the APT equation is that we allow for residuals η.
By the moment conditions of the projection (or: regression) we see that η has the
properties of an arbitrage portfolio’s weights:
1′ η = 0
β_f′ η = 0   ∀ f = 1 … F
Its expected return equals

η′ µ = λ_0 (η′ 1) + (η′ B) λ + η′ η = η′ η

which must be zero by the NA condition. This is only achieved if η = 0, which proves the APT equation.
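The projection argument can likewise be sketched in Python (the λ values are arbitrary example inputs): construct a µ that satisfies the APT equation, run the cross-sectional regression on a constant and the betas, and observe that the regression recovers the λ’s with residuals η = 0:

```python
import numpy as np

rng = np.random.default_rng(3)
N, F = 6, 2
B = rng.normal(1.0, 0.5, size=(N, F))      # factor loadings (illustrative)
lam0, lam = 0.01, np.array([0.04, 0.02])   # example risk premia
mu = lam0 * np.ones(N) + B @ lam           # expected returns obeying the APT

# Cross-sectional regression of mu on a constant and the betas
X = np.column_stack([np.ones(N), B])
coef, *_ = np.linalg.lstsq(X, mu, rcond=None)
eta = mu - X @ coef                        # regression residuals

print(coef)                                # ≈ [0.01, 0.04, 0.02]
print(np.max(np.abs(eta)))                 # ≈ 0: eta vanishes, as NA requires
```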
⁴ The mathematics behind this result are based on the “Fundamental Theorem of Linear Algebra”. This theorem is concerned with the solutions x to Ax = 0, a.k.a. the “nullspace” of A. In our case A = (µ 1 B)′ is an (F + 2) × N matrix. If µ did not lie in the space spanned by the columns of (1 B), the nullspace of A would have dimension N − F − 2, while the nullspace of (1 B)′ has dimension N − F − 1. The NA conditions, however, state that every w_0 in the nullspace of (1 B)′ also satisfies w_0′ µ = 0, so the two nullspaces coincide and have the same dimension. Hence µ must lie in the space spanned by the columns of (1 B).
⁵ Essentially, this is an illustration of the workings of the fundamental theorem of linear algebra.
List of Figures

Figure 1: Orthogonality restrictions of the APT for N = 3 and F = 1
References

Connor, Gregory, and Robert A. Korajczyk. 1995. “The Arbitrage Pricing Theory and Multifactor Models of Asset Returns.” Chapter 4 of Finance, edited by R.A. Jarrow, V. Maksimovic, and W.T. Ziemba, Volume 9 of Handbooks in Operations Research and Management Science, 87–144. Amsterdam: North-Holland.

Elton, Edwin J., and Martin J. Gruber. 1995. Modern Portfolio Theory and Investment Analysis. 5th edition. New York, NY: John Wiley & Sons, Inc.

Fama, Eugene F. 1973. “A Note on the Market Model and the Two-Parameter Model.” The Journal of Finance 28 (5): 1181–1185 (December).

Ingersoll, Jonathan E., Jr. 1987. Theory of Financial Decision Making. Studies in Financial Economics. Savage, MD: Rowman & Littlefield.