This lecture series covers the use of the R language, its interface, and the functions required to evaluate financial risk models. It addresses R applications for financial market data, risk measurement, modern portfolio theory, risk modelling of returns with generalized hyperbolic and lambda distributions, Value at Risk (VaR) modelling, extreme value methods and models, the class of ARCH and GARCH risk models, and portfolio optimization approaches.
TOPICS OF CHAPTER NO. 10
In this lecture, we will cover the following topics:
10. Robust Portfolio Optimization
i. Overview
ii. Robust Statistics
a. Motivation
b. Selected Robust Estimators
iii. Robust Optimization
a. Motivation
b. Uncertainty Sets and Problem Formulation
iv. Synopsis of R packages
a. The package covRobust
b. The package fPortfolio
c. The package MASS
d. The package robustbase
e. The package robust
f. The package rrcov
g. Second-order cone programs (SOCPs)
h. The package Rsocp
v. Empirical Applications
a. Portfolio Simulation: robust versus classical statistics
b. R code 10.1 Portfolio simulation: data generation
c. R code 10.2 Portfolio simulation: function for estimating moments
d. R code 10.3 Portfolio simulation: estimates for data processes
e. R code 10.4 Portfolio simulation: minimum-variance optimizations
vi. Portfolio back-test: robust versus classical statistics
a. R code 10.5 Portfolio back-test: descriptive statistics of returns
b. R code 10.6 Portfolio back-test: rolling window optimization
c. R code 10.7 Robust portfolio optimization with elliptical uncertainty
d. R code 10.8 Efficient frontiers for mean-variance and robust counterpart optimization with elliptical uncertainty of 𝝁
e. R code 10.9 Determining equivalent mean-variance allocation for a given robust counterpart risk weighting
f. R code 10.10 Graphical display of efficient frontier for mean-variance and robust counterpart portfolios
CHAPTER OVERVIEW
The use of sample estimators for the expected returns and the covariance matrix can result in sub-optimal portfolios due to estimation error. Furthermore, extreme portfolio weights and/or erratic swings in the asset mix are commonly observed in ex post simulations.
For these reasons, minimum-variance portfolios are often advocated over mean-variance portfolios (see the references in Chapter 5).
It would therefore be desirable to have estimators available which lessen the impact of outliers and thus produce estimates that are representative of the bulk of the sample data, and/or optimization techniques that incorporate estimation errors directly.
The former can be achieved by utilizing robust statistics, the latter by employing robust optimization techniques.
The chapter concludes with empirical applications
in the form of a Monte Carlo simulation and back-
test comparisons, where these robust portfolio
optimizations are compared to portfolio solutions
based on ordinary sample estimators.
ROBUST STATISTICS
Motivation: It was already pointed out in Chapter 3 that the normality assumption quite often does not hold for financial market return data.
The violation of this assumption is justified on empirical grounds by the stylized facts of single and multivariate returns.
However, it was also shown in Chapter 6 that the normality assumption is violated to a lesser extent the lower the data frequency.
The arithmetic mean, as an estimator for the location of a population, is sensitive to extreme observations, such that the estimate does not reflect the bulk of the data well.
On a similar note, the dependence between two random
variables can be highly distorted by a single outlying
data pair.
In light of this, it would be desirable to have recourse to
methods and techniques that are relatively immune to
such outliers and/or to violations of the underlying
model assumptions.
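The distortion of a dependence measure by one outlying pair is easy to reproduce in base R. The sketch below uses simulated data, so the exact figures depend on the seed:

```r
# A single outlying data pair can dominate the sample correlation.
set.seed(42)
x <- rnorm(100)
y <- rnorm(100)          # generated independently of x
cor_clean <- cor(x, y)   # close to zero by construction

x[100] <- 10             # replace one observation pair with a gross outlier
y[100] <- 10
cor_outlier <- cor(x, y) # pulled strongly towards +1 by that single pair
```

With 99 essentially uncorrelated pairs and one point at (10, 10), the sample correlation jumps to roughly 0.5, although the bulk of the data carries no dependence at all.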
The field of robust statistics deals with problems of this
kind and offers solutions in the form of robust estimators
and inference based upon these.
Formerly, the outlier problem sketched above was resolved by means of trimming (removal of outliers) or winsorizing (setting extreme observations equal to a fixed quantile value).
Indeed, both methods can be considered as means of
robustification.
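Both robustifications are one-liners in base R. A minimal sketch on a simulated sample with one gross outlier (the variable names are illustrative only):

```r
# Trimming and winsorizing as simple robustifications of the sample mean.
set.seed(1)
x <- c(rnorm(50), 25)            # 50 standard normal draws plus one gross outlier

m_raw  <- mean(x)                # distorted by the outlier
m_trim <- mean(x, trim = 0.05)   # trimming: drop 5% of observations in each tail

# Winsorizing: cap observations at the 5% and 95% sample quantiles
q <- quantile(x, probs = c(0.05, 0.95))
x_wins <- pmin(pmax(x, q[1]), q[2])
m_wins <- mean(x_wins)
```

The trimmed and winsorized means both sit close to the centre of the bulk of the data, whereas the raw mean is shifted upwards by the single outlier.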
It is worth mentioning that so far the terms “outlier” and “extreme observation” have not been precisely defined.
The reason for this is simple: there is no clear-cut way to
assess whether a data point is an outlier or not.
The question is always a relative one and crucially
depends on the underlying model/distribution assumption.
For example, given the standard normal distribution and a
sample observation of 5, one could surely classify this
data point as an outlier.
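This relativity can be quantified directly: under the standard normal an observation of 5 is essentially impossible, while under a heavier-tailed distribution such as a t with 3 degrees of freedom it is merely unusual. A quick check in R:

```r
# Tail probability of observing a value of at least 5
p_norm <- pnorm(5, lower.tail = FALSE)      # ~2.9e-07 under N(0,1): a clear outlier
p_t3   <- pt(5, df = 3, lower.tail = FALSE) # ~0.008 under t(3): far less surprising
```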
Selected robust estimators: The most commonly
utilized measure for assessing the robustness of an
estimator is the breakdown point (BP).
This measure is defined as the largest relative share of outliers in a sample for which the estimator does not take an arbitrarily large value.
By definition, the BP can take values between 0 and
0.5.
The arithmetic mean has a BP of 0, because if a single observation is replaced by an arbitrarily large value, the location estimate can be driven arbitrarily high.
The upper bound of the BP is explained by the fact
that if more than half of the observations are outliers,
the sample is falsified to a degree such that no
inference with respect to the population can be drawn
from it.
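The contrast between the BP of the mean (0) and that of the median (0.5) can be seen directly on a contaminated sample:

```r
# A single gross outlier breaks the mean but not the median.
set.seed(1)
x <- c(rnorm(99), 1e6)   # contaminate one of 100 observations
mean(x)                  # dragged to roughly 10000 by the single outlier
median(x)                # essentially unchanged, near 0
```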
A further criterion for assessing the appropriateness of a
robust estimator is the relative efficiency (RE).
Here, the asymptotic variance of a robust estimator is
expressed relative to the variance of an optimal estimator
which has been derived under strict adherence to the
model/distribution assumption.
As such, it can be interpreted as a percentage figure indicating by how much the sample size of the robust estimator would have to be increased for the variances of the two estimators to be equal.
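The RE figure can be illustrated with a small simulation. The snippet below, a sketch under normality, estimates the variance ratio of the sample mean to the sample median, whose asymptotic value is 2/π ≈ 0.64:

```r
# Monte Carlo estimate of the relative efficiency of the median vs. the mean
# under normality (asymptotic value: 2/pi ~ 0.64).
set.seed(123)
n    <- 500
reps <- 4000
means   <- replicate(reps, mean(rnorm(n)))
medians <- replicate(reps, median(rnorm(n)))
re <- var(means) / var(medians)   # close to 0.64
```

In other words, the median needs roughly 1/0.64 ≈ 1.57 times as many observations to match the precision of the mean when the data really are normal.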
Class of M and MM estimators: As early as 1964, the class of M-estimators was introduced by Huber.
The class name indicates the resemblance of these estimators to the ML principle (see Huber 1964, 1981).
The unknown parameters 𝜃 are determined such that they are most likely to have produced a given iid sample.
The function 𝜌(⋅) must meet the requirements of symmetry,
positive definiteness, and a global minimum at zero.
Of course, the function should provide decent estimates when
the model/distribution assumptions are met and not be
negatively affected by instances of violation.
The difference between the robust forms of the M-estimators and those of the ML and LS principles lies in the specification of 𝜌(⋅).
For the former, extreme data points receive a smaller weight and are thus less influential with respect to the parameter estimates.
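Huber's location M-estimator is readily available in R, for example as MASS::huber(). A small sketch on contaminated data:

```r
# Huber's M-estimator of location downweights extreme observations.
library(MASS)
set.seed(7)
x <- c(rnorm(100), 50)   # contaminated sample with one gross outlier
mean(x)                  # pulled towards the outlier, roughly 0.5
huber(x)$mu              # robust location estimate, close to 0
```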
If one applied this dispersion estimator directly to the data pairs of X, the resulting variance-covariance matrix would not be guaranteed to be positive definite. For this reason, Maronna and Zamar (2002) proposed an orthogonalization of X; hence the estimator is termed “orthogonalized Gnanadesikan–Kettenring” (OGK).
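The OGK estimator is implemented, for instance, as covOGK() in the robustbase package (and as CovOgk() in rrcov). A sketch with 5% contaminated bivariate data:

```r
# Classical vs. OGK covariance estimation on contaminated bivariate data.
library(robustbase)
set.seed(42)
X <- cbind(rnorm(200), rnorm(200))   # two independent series, true covariance 0
X[1:10, ] <- 8                       # 5% of the pairs are gross outliers at (8, 8)

cov(X)[1, 2]                         # classical covariance, inflated by the outliers
ogk <- covOGK(X, sigmamu = s_mad)    # OGK with the MAD as robust scale estimate
ogk$cov[1, 2]                        # close to the true value of 0
```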
ROBUST OPTIMIZATION
Motivation: The term “robust” here refers to an optimization technique that produces a solution which is not negatively impacted by an alternative parameter specification, for example if the return expectations turn out to be less favorable.
Incidentally, robust optimization techniques differ from stochastic optimization in that the latter is based on a specific distributional assumption for the parameters.
In a nutshell, the aim of robust optimization is the derivation of an optimal solution for sets of possible parameter constellations.
Uncertainty sets and problem formulation: The
concept of robust optimization will now be elucidated
for mean-variance portfolios, although the approach is
also applicable to other kinds of optimization.
The classical portfolio optimization problem is given by:
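The formulation itself does not appear on this slide; the standard mean-variance (Markowitz) problem referred to is, in the usual notation (w: portfolio weights, 𝝁: expected returns, Σ: covariance matrix, r̄: target return):

```latex
\begin{aligned}
\min_{\mathbf{w}} \quad & \mathbf{w}^{\top} \Sigma \, \mathbf{w} \\
\text{s.t.} \quad & \mathbf{w}^{\top} \boldsymbol{\mu} \geq \bar{r}, \\
& \mathbf{w}^{\top} \mathbf{1} = 1 .
\end{aligned}
```

Robust counterparts of this problem replace the point estimates 𝝁 and Σ by uncertainty sets of plausible parameter values, as elaborated in the remainder of the chapter.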