Iain and Gareth, thanks again for asking me along today. I'd like to run through the presentation, which should take no more than 20 minutes. Undoubtedly you'll have questions, but I ask that we cover them at the end to ensure we get through in time. So, the dilemma: "Ensuring funds are doing what they state within their published objectives." Do funds do what they say? And how does this vary by asset class?
As fund analysts we are often derided as the 'blunt end of the stick' when it comes to the tricky numbers. In the last decade we have seen an explosion in models to calculate the risk of financial instruments. One might then assume that governance is somehow all about numbers? Two things are true of these advances in financial analysis: 1) they became more complex and mathematical, and 2) they failed to reduce the level of risk for the investor: think LTCM, Black Monday (19/10/1987), the Barings/Asia crisis (1997), the dot-com crash (2000) and the credit crunch (2008). Common to all of these systems is that they miss out moral hazard! In 2008 Bernie Madoff's Ponzi scheme shook many a fund of hedge funds manager, PhD think tank and investor alike. An extreme example perhaps, but we can go back to Keynes after the 1930s crash, or Albert Einstein for that matter; their views hold true today. There is no holy grail: fund governance is NOT about predicting the future!

1950s: Optimization (Markowitz)
1960s: Capital Asset Pricing (Treynor, Sharpe etc.)
1970s: Attribution (SIA UK, *****)
1970s: Arbitrage Pricing Theory (Ross)
1980s: Heuristics and behaviour (Kahneman & Tversky)
1990s: Stochastic calculus (Wiener, Black, Scholes, Merton)
1990s: Rise of Value at Risk based models (VaR)
2000s: Asset-liability strategies (ALM, aka LDI)
2000s: Fluid dynamic models (Navier-Stokes)
2000s: ARCH-based models (Engle)
2000s: Levy-jump models and power laws (LSE)
2000s: Complex adaptive systems, chaos theory, entropy (Lorenz)
2000s: Extreme Value models, organic ()
It's about Treating Customers Fairly and building in resilience to protect investors from the uncertain. TCF guidance gives us a framework to do this. TCF Outcome 5: do the funds perform as expected? From which there are risks facing the investor:

Classification limits: these set maxima for non-core exposure. Positioning of funds is crucial to the firm; it can be the difference between ranking above the 75th percentile or down at the 25th. Industry classifications change infrequently and investment managers often perceive them as out of date, an example being the IMA's slow reaction to UCITS III, absolute return, 130/30 and cash-enhanced funds.

Differing expectations: investor (pre-sale), marketeer (point-of-sale), investment manager (post-sale).

The benchmark paradox: benchmarks are often set by marketing to anchor the familiarity of a new fund with peers; investment managers are often grudgingly persuaded into accepting benchmarks rather than a true proxy of their strategy/approach.

Changes in risk at portfolio level: often a fund can materially change, but that risk is not obvious from the performance. The investor's dilemma is that their portfolio could be less diversified or higher risk going forward.

Mandate limits: a key area for governance. The mandate will state the manager's objective (income/growth) and what he/she intends to buy, where and when; whether the fund will be geared; use of derivatives; investment horizon; benchmark; tracking error.

All of these things could mislead the investor and leave subsequent performance far from 'expected'.
As a result supervisors have recognised the increasing pressure on fund managers to deviate from the investment objective, and the rising importance for investors of knowing what they are buying. We can simplify into two types:

Objective-based: deviation by investing outside of the fund's guidelines.
Drift: changing process (e.g. adding derivatives when there is no provision in the prospectus, or a shift from buying growth stocks to buying discounted recovery stocks). An investment-grade bond fund starts to buy sub-BBB paper, deviating away from a common fund family.
Objectives: the mandate states what the fund will do and should be easy for the client to understand so they can make an informed decision. Fund managers and legal teams will endeavour to write in more latitude to protect themselves from censure. This creates ambiguity for the investor. Tracking errors are commonplace among investment objectives, designed to reduce excess deviation risk at the expense of sector concentration. In 2006 many funds had R-squareds >90% to the S&P 500 and historically low tracking errors on modern record. Managers were compelled to follow the herd and fell at once.

Systemic-based: moral hazard and unusual investing behaviours resulting from peer pressures.
Rising risk appetite; falling risk awareness: back in early 2007 no one wanted to talk about risk because all the standard time periods showed everyone was up. Often when managers are busy fighting over 50bps, to be top quartile, controls are relaxed and risks taken.
Herding behaviours: have managers deviated from their normal process to partake in an asset bubble, or to immunise from a beta-driven run? Years ago Neil Woodford talked about almost losing his job at Invesco because he didn't partake in the dot-com bubble.
Transparency: has the fund unexpectedly begun to use structured products, hedge funds, derivatives, OTC deals? How quantifiable is the net exposure to the market? Is the fund leveraged?
Relative performance: is the fund manager 'window dressing' by selling cash to boost end-of-quarter numbers? Is a bond fund stripping the fund to boost yield? Is an equity manager using high-rate overnight loans with his slush fund to gross up his quoted alpha?
Classification boundaries (IMA): often managers will attempt to hold more than permitted to generate false alpha against the benchmark. Going back to around 2003/2004, both Tony Bolton and Andrew Green were slapped by the IMA for holding more non-UK equity than permitted.
These hazards demand a framework to measure different risks for each asset class. This table is something of a simplification but serves to point out one thing: some risks differ by asset class; some are more systemic. In 2008 my then Chief Administration Officer asked me to take on the TCF project and set up a 'health monitoring' framework; the next few slides summarise some of that work. By November 2008, when the first set of ARROW visits were expected, we had a fully up-and-running framework covering our OEIC range, with SICAV in development. More importantly, I was also able to integrate my work in TCF with S&P ratings and with prospectus and product development projects. I refer back to TCF: that the performance of the fund is 1) in line with the investor's expectations, 2) consistent with the way it was sold and 3) as per the mandate limits in the fund prospectus. The reason for three tests: these factors can vary independently of each other. We use a traffic-light system of Green, Amber, Red, a common standard in MI, both in TCF and in risk management. Results and exceptions can be quickly recorded for the TCF Scorecard and fed back through to the 'risk log'.
To start we must outline the 'expected behaviours' of different asset classes, fund sectors and the funds therein. This is the business as usual; we expect different performance patterns. We back-test the histogram of each IMA sector (over, say, 10 years). Anywhere between 3-5 cycles should represent a long-term history ('the law of large numbers'). The type of distribution typifies the volatility, range of return and tail risk we can expect. This shouldn't be confused with forecasting, as all volatility recorded is historical (ex-post). We are simply setting a baseline to track unexpected outcomes, BUT it could be used for ex-ante stress/scenario purposes later on. A quick run through the types of distribution. 'Positive', as it suggests, is usually a zero-loss distribution with a decreasing right tail of positive returns. Cash and guaranteed products should be found here (and I stress the word 'should'). Funds with a flat, plateau-like spread of returns usually experience more frequent but smaller losses: we label these Uniform (a platykurtic pattern). Gilt and bond funds often display this type. Funds with narrow, long left tails we label Left-tail (negatively skewed). Often there is a high concentration of returns around the mean in the higher return ranges; to the left, however, there is a long series of acute negative returns - this is the left tail. They include higher-risk funds (e.g. emerging markets) and funds with low correlation to core markets, such as REITs and high-yield bonds. Some funds even have large discontinuities with their historical trend, e.g. commodity funds, due to the way commodities are traded. We expect funds such as core equity and balanced managed funds to typify a bell-shaped distribution, but in reality they are often atypical; perfect bell curves rarely exist in financial markets, yet they perhaps underpin 90% of all analysis.
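To make the baselining concrete, here is a minimal sketch of typing a sector's return histogram by its sample moments. It is illustrative only: the bucket names follow the talk, but the skew/kurtosis cut-offs are my assumptions, not a production classifier.

```python
import statistics

def moments(returns):
    """Sample skewness and excess kurtosis of a return series."""
    n = len(returns)
    mean = sum(returns) / n
    sd = statistics.pstdev(returns)
    skew = sum((r - mean) ** 3 for r in returns) / (n * sd ** 3)
    kurt = sum((r - mean) ** 4 for r in returns) / (n * sd ** 4) - 3.0
    return skew, kurt

def classify(returns):
    """Crude bucketing into the four shape types used in the talk.
    Thresholds (-0.5) are illustrative assumptions."""
    skew, kurt = moments(returns)
    if min(returns) >= 0:
        return "positive"   # zero-loss, right-tail only (cash-like)
    if skew < -0.5:
        return "left-tail"  # long run of acute negative returns
    if kurt < -0.5:
        return "uniform"    # flat spread, frequent-but-small losses
    return "bell"           # roughly normal
```

A cash-like series that never dips below zero lands in "positive"; a series of small gains punctuated by a few severe losses lands in "left-tail".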
So that's the expected outcomes, but what about the unexpected? For these we need Key Risk Indicators by distribution type, by asset class, by sector, scored logically back through a common system. I have noted just four sector examples that you could find in the IMA today. We set KRIs back against the mandate limits of each fund, or a proxy at sector level (e.g. IMA classification limits). However, there are asset classes which are difficult to assess due to poor transparency. Structured products face this problem: they have an extra layer of counterparty risk to contend with that is largely invisible to the investor. The risk hasn't been removed; it has been moved. They have sold well, especially in Europe (e.g. Holland's NL-AFM GUISE favours funds with safety nets). However, in 2008 there was a global move away from 'black box' strategies (much to the pain of some private banks such as UBS). That same problem continues for the fund analyst: frankly, any fund that uses a broad opportunity set of default swaps, futures and collateralised debt is bloody hard to measure. The generally accepted approach (CAIA) is to use Black-Scholes-type methods to anticipate the future cash flows. However, in truth it's far less complicated to track for material changes at fund level and in the output returns. Besides, such funds are adequately tagged as 'complex' and the risks well discussed (for hedge if not structured). If we think of GARS, its problem is also a great story: it hasn't suffered a downturn common to peers. Kate Hollis at S&P will quickly tell you: absolute return funds have been found wanting before. Internally this leaves a question mark over GARS long term, combined with exceptional market inflows. We can't build on past downside, so we need to use (potentially inferior) peer funds or the IMA Absolute Return average. The problem with this sector is one of consistency: funds are almost as varied as the hedge industry. At EFAMA in 2008 we did set down better lines, but the IMA has been slow to move.
And how might unexpected behaviours then emerge? I propose that when tracking large sectors we focus on the outlier funds (upside/downside). Where a fund spikes away from its peer group, that is where we focus our attention for deeper analysis. During my time answering RFP and platform queries I found attribution analysis was not infallible. No matter whether it was additive or geometric, its main problem was the use of aggregate time periods (thank GIPS and MiFID): standard time-period reporting smoothes unexpected behaviours back towards the mean value. In other words, the key 'effects' can be lost or diminished by the averages. Each analyst should therefore take into account dispersion periods of interest and run specific attribution queries (daily where possible; Barra/FactSet cater for this). If no attribution software is available, then you can manually check the following:
- Check changes in the portfolio holdings during the period; failing full holdings data, check the top 10 weights or the RFP for top active weights (+ve/-ve). Examine against fund manager commentary.
- Check for differences in sector weights, geographical positions, concentration of holdings etc. Where a fund is concentrated in specific industries, use proxy indices to compare the performance of, say, FTSE 100 Financials vs. Transport.
- Compare market cap to the peer average/median; check for unusual average book values.
- For a bond fund, check the relative average credit quality, duration, YTM and types of holdings: currency, treasury, how much in corporates, in which countries, investment grade, HY etc.
Attribution in fixed income funds is akin to hitting a moving target from a flying object: no two bond issues are exactly the same: different maturity, different coupon, price, duration, spread (premium). Bond indices like the Barclays (Lehman) 'Multiverse' are huge, and the number of positions in a bond fund is also great.
Companies like FactSet are trying to develop better attribution techniques, but these will always miss that bond markets are traded OTC (syndicated placings) as well as on the secondary markets.
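The "focus on the outlier funds" step above can be sketched as a simple robust peer screen. This is a sketch under assumed thresholds: the median/MAD approach and the 2.0 cut-off are my choices for illustration, not part of the original framework.

```python
import statistics

def flag_outliers(period_returns, z_threshold=2.0):
    """Flag funds whose period return sits more than z_threshold robust
    z-scores from the peer median (upside or downside spikes alike).
    period_returns: dict of fund name -> return for the period."""
    rets = list(period_returns.values())
    median = statistics.median(rets)
    # median absolute deviation, scaled to be comparable to a std dev
    mad = statistics.median(abs(r - median) for r in rets) * 1.4826
    if mad == 0:
        return []  # degenerate peer group, nothing to compare against
    return [name for name, r in period_returns.items()
            if abs(r - median) / mad > z_threshold]
```

Using the median and MAD rather than mean and standard deviation stops one extreme fund from inflating the yardstick it is measured against.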
This leads us to the output. With a framework we know, in simple terms, the risks inherent to each fund type: we identify both the expected and the potential unexpected behaviours. Logic grids help break down analysis into a sequence of questions and answers; these grids are commonly used in risk management, prompting simple Yes/No answers arranged around specific KRIs: question, answer, question. It's transparent and provides an easy audit trail. Results can be easily converted into a simple score for comparison with other funds, sectors and asset classes. From a risk-management point of view, scores can be considered as 'probability', while the tracked Value at Risk could easily provide the 'impact' value. This has additional benefits for any risk managers wanting to gauge the risk profile against the group's risk appetite. Measures such as Value at Risk can be aggregated up to sector, asset and overall level. For contrast, VaR can be measured over very short periods (1d, 5d, 20d) or more crudely over 250d or 36-month periods. We track the mean/median scores across all sectors and set thresholds to report against. We track our performance and review the model when needed.
Going back to the original question: how can we "ensure funds are doing what they state within their published objectives"? I propose we can do this through:
A practical framework: we should stop chasing the complex, track actual events and build for resilience. We do this by setting specific Key Risk Indicators by asset class, with common thresholds for reporting and escalation.
Being self-critical: correct where our system didn't work and highlight where it did; make changes where and when required.
A team approach: develop a collegiate approach to research, share, and use devil's-advocate discussions. Weekly minuted team meetings are crucial.
Adoption and rotation of sector coverage: allow team members to become experts while rotating every 6-12 months to help with key-person planning. Five analysts would allow coverage to be broken up by: Equity, Bond, Money Market, Alternative (inc. GARS) and Specialist (Tech, RE etc.). Unlike at JRG, where I managed the number of funds in each sector based on the house asset allocation, here we can apply resources based on sales flow activity and AUA held.
Communication: lastly, it's critical that we inform the business of our progress via internal bulletins and dashboards. If leaders and investors cannot benefit from what we do, then we have failed to mitigate those potential risks. We can do this by informing the relevant decision-makers and risk committees in a structured way.
To close, it's about the big picture: map and report overall VaR, aggregate Key Risk scores and flag broad asset and sector movements. I'm sure there will be areas I haven't had time to cover, so please ask away. Thank you.
Funds are scored by comparing expected vs. unexpected outcomes; these should be tangible risks that are not overly reliant on stats. The logic test, based on the particular set of KRIs, consists of questions ordered methodically from bottom right (green) through to top left (red), applying both quantitative data and qualitative information such as sales flow data and material changes affecting the Fund (e.g. change of lead manager, change in charges, investment objectives). The logic test produces yes (Y) or no (N) outcomes. A 'yes' response escalates the test to the next question. Each 'yes' response equals one point. Funds are rated on a colour-coded (Green, Amber, Red) number scale of 1 to 12 (1 = most healthy, 12 = least healthy).
GREEN - Business as usual (no tests are negative). No further action taken other than regular monitoring as part of the Health Monitor. This means we would do very little else outside the Health Monitor to track the Fund, with low levels of discussion of the results.
AMBER - Alert KRIs for issues (one test may be negative). The Fund will be flagged and any systemic issues discussed, but no immediate actions taken. The fund/sector is flagged to HO Governance to examine the fund more closely and assess what steps or discussions, if any, are required. This will initially involve an internal discussion regarding the performance of the fund vs. marketing material and positioning. Based on this discussion the analyst may approach the ratings agency or fund manager for further information and assess the responses. These discussions may or may not lead to actions, depending on the assessment against investor expectations, investment guidelines, and the basis on which the product was sold.
RED - Action for potential remedy (more than one test may be negative). Exception reported to HO Governance and upwards to discuss and address specific issues with PMs.
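The escalating Yes/No test and RAG banding described above could be sketched as follows. This is one plausible reading of the scoring rule (each 'yes' adds a point and escalates; a 'no' stops the test), and the band boundaries are taken from the escalation appendix (1-2 Green, 3-10 Amber, 11-12 Red); treat both as assumptions rather than the exact production logic.

```python
def kri_score(answers):
    """Score a logic test from an ordered list of Yes/No answers.
    Each 'yes' escalates to the next question and adds one point;
    the first 'no' ends the test. Scores run 1 (healthiest) to 12."""
    score = 1
    for yes in answers:
        if not yes:
            break
        score += 1
    return min(score, 12)

def rag_band(score):
    """Map a 1-12 score onto the traffic-light bands used in the talk."""
    if score <= 2:
        return "GREEN"
    if score <= 10:
        return "AMBER"
    return "RED"
```

A fund that clears every test stays at 1 (Green, business as usual); one that trips the full chain of questions caps out at 12 (Red, action for remedy).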
Review recent portfolio activity, with the possibility of changing marketing materials and sales positioning. Sales & Marketing will be notified of the key outcomes of discussions regarding the Red rating, along with any actions and decisions that will impact future sales and marketing. As with Amber, these discussions may or may not lead to actions, depending on the assessment against investor expectations, investment guidelines, and the basis on which the product was sold.
The model would be tracked over time and thresholds adjusted when needed. This model is cautious in that a fund is more likely to be flagged for alert (creating discussion). The thresholds for action or BAU could be modified; once set up, these thresholds should only be adjusted based on performance and not over-influenced by the business side. The overall score is the mean of all the results, rounded to the nearest whole number. Deviations from the mean are tracked on a month-to-month basis. Mode and median can also be used in conjunction. The scores are recorded and the results aggregated up to the scorecard as a histogram (score (x), frequency (y)).
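The aggregation described here (mean rounded to the nearest whole number, median alongside, and a histogram of score frequencies for the scorecard) could be sketched as:

```python
import statistics

def scorecard(fund_scores):
    """Aggregate KRI scores across funds: overall rounded mean, median,
    and a score->frequency histogram for the TCF scorecard.
    fund_scores: dict of fund name -> 1-12 KRI score."""
    scores = list(fund_scores.values())
    hist = {}
    for s in scores:
        hist[s] = hist.get(s, 0) + 1
    return {
        "mean": round(statistics.mean(scores)),
        "median": statistics.median(scores),
        "histogram": dict(sorted(hist.items())),  # score (x) -> frequency (y)
    }
```

Month-to-month deviation tracking then reduces to comparing successive `mean` values against the agreed thresholds.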
Investment horizon: an investor measures return from buy to sell; the investment manager measures based on trades and against peers over standard time periods. This chart shows the waterlines of three funds over 25 investment horizons, as well as the corresponding total inflows and outflows. I ran this analysis for some funds at Franklin as a refresh of the Magellan study by Fidelity in 2005, which showed the fund made a CAGR of 15% but the average investor made only 5%. It shows that the return of each fund varied greatly over the different horizons. How a fund is sold often sets expectations of how it might return over the short, medium and long term. Newly launched funds are particularly sensitive to large inflows/outflows until their fund size grows. For governance it's vital that the investment manager does not compromise the investor by managing the portfolio to deliberately return over a time period that's not consistent, or by changing the return characteristics (horizon) of the Fund.
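The gap between a fund's CAGR and the average investor's return is the difference between time-weighted and money-weighted returns. A small sketch (toy numbers, not the Magellan data) shows how buying after a good year drags the investor's money-weighted return below the fund's CAGR:

```python
def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Money-weighted (internal) rate of return via bisection.
    cashflows[t] is the flow at year t: investments negative,
    the final redemption value positive."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Fund returns +50% then -20%: time-weighted CAGR = (1.5 * 0.8)**0.5 - 1,
# about +9.5% a year. An investor who adds 100 after the good year has
# more money riding the bad year: invest 100 at t0, 100 at t1, and end
# with (100*1.5 + 100)*0.8 = 200 at t2 -- a money-weighted return of 0%.
```

Same fund, same period, yet the investor's timing alone wipes out the headline return: exactly the expectation gap the Magellan refresh illustrated.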
Gathering long-term histograms of different sectors helps us set the expected range of return, central tendency and distribution.
This map was prepared weekly for the German Sales Director in Frankfurt in 2008. It tracks 20d VaR against 36-month VaR and movements therein. Originally it also showed the bull-bear capture ratio, but it can be used to show a KRI score. Each cell has an underlying data sheet of chosen metrics and specified competitors to allow deeper analysis. It uses T+1 to Friday-close data, supplied to Sales on Monday, via Lipper automated tables.
Governance Interview (proposal Mar2010)
‘ Do funds do what they say?’ My proposal for fund governance across different asset classes.. March 2010 Jon Beckett, ASCI
Chasing a Holy Grail? "Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage - to move in the opposite direction." Albert Einstein. Simple vs. Complex. "... a speculator is one who runs risks of which he is aware and an investor is one who runs risks of which he is unaware." John Maynard Keynes. Governance: Do Funds do what they say?
Treating Customers Fairly. "Consumers are provided with products that perform as firms have led them to expect and the associated service is both of an acceptable standard and as they have been led to expect." 'Treating Customers Fairly: Measuring Outcomes', Outcome 5: FSA Progress Update, June 2008. Governance: Do Funds do what they say? TCF #5
Moral hazards. "There are 2 kinds of rugby players - there's the honest ones and then the rest." Jim Telfer, 1997 Lions tour. Governance: Do Funds do what they say?
Building a Framework
- TCF Outcome 5
- Products not performing in line with:
  - Investor expectations
  - Investment guidelines
  - On the basis sold
Governance: Do Funds do what they say?
[Slide table, flattened in extraction: rows are asset classes (Cash, Structured, Bond, Equity, Commodity, Property, Derivative); columns are risk types (Volatility, FOREX, Credit & Issuer, Market (Beta), Inflation & Interest, Counter-party, Liquidity); cells marked 'x' where the risk applies. Cash, Structured, Equity and Commodity each carry 4 marks, Bond 5, Property and Derivative 6; the column mapping of the x's is not recoverable from the text.]
Expected behaviours? Governance: Do Funds do what they say?
Unexpected behaviours?

Positive Distribution - Cash Fund
(R) Change in sector classification to riskier peer group
(R) Presence of non-cash: CDO, CMBS, ABS etc.
(A) Nominal return below LIBOR; poor real rate of return
(A) Excessive positive returns (e.g. +100bps > LIBOR)
(A) Use of non-deposit short-term paper, gilts
(A) Monthly returns dip below LIBOR/SONIA
(A) Redemptions or falling assets
(A) Rising volatility of returns

Uniform Distribution - Gilt or Bond Fund
(R) Sovereign downgrade of core holding(s)
(R) Heavy use of CDS and derivatives
(R) Rising correlation to equities and HY
(A) Multiple netting agreements, strips etc.
(A) Long and rising duration/maturity
(A) Falling yields/narrowing spreads
(A) Growing negative skewness; rising kurtosis
(A) Tracking error above benchmark limits

Normal Distribution - UK Equity Income Fund
(R) Maximum loss in excess of expected
(R) Large reduction in yield pay-out
(A) High cash (close to or above IMA limits)
(A) High P/Earnings; low or falling P/Cash Flow
(A) High 12-month portfolio turnover relative to peers
(A) Change in mandate to allow non-equities
(A) High non-UK exposure
(A) Downgrade of FMR from S&P

Left-Tail Distribution - EM Equity Fund
(R) High concentration in a single country (e.g. China)
(R) High excess Value at Risk; frequent violations
(R) Fund manager change; high team turnover
(A) Rating downgrade of core countries held
(A) Large swings in quartile deviation
(A) Excessive liquidity into/out of sector
(A) Significant dispersions away from sector
(A) High TER, non-performance linked

Distinct sets of KRIs per category! Bell / Uniform / Left-tail
Governance: Do Funds do what they say?
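As an illustration, the positive-distribution (cash fund) KRIs above could be automated roughly as follows. The thresholds and flag wording are my assumptions: the slide gives the direction of each check but not exact limits, so the ~100bps annualised excess-return cut-off is illustrative.

```python
def cash_fund_kris(monthly_returns, libor_monthly):
    """Illustrative amber checks for a positive-distribution cash fund,
    mirroring two of the slide's KRIs: returns dipping below LIBOR and
    'excessive' positive returns. Returns are in percent per month."""
    excess_limit = 1.00 / 12  # ~100bps annualised excess, an assumption
    flags = []
    for fund_r, libor_r in zip(monthly_returns, libor_monthly):
        if fund_r < libor_r:
            flags.append("AMBER: monthly return below LIBOR")
        elif fund_r - libor_r > excess_limit:
            flags.append("AMBER: excessive excess return over LIBOR")
    return flags
```

The remaining KRIs on the slide (sector reclassification, non-cash paper, redemptions) are qualitative or holdings-based and would feed the same flag list from other data sources.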
Dispersion - Attribution
Conventional attribution: standard time-period average.
Targeted attribution: standard time period to date of request. Check for changes, e.g.:
- Mandate or benchmark
- Portfolio holdings
- Percentile ranking
- Quartile deviation, inter-quartile range
- Tracking Error (R²)
"Only takes one tree, to make a thousand matches / Only takes one match, to burn a thousand trees" Stereophonics, 1997
Governance: Do Funds do what they say?
MI & Analysis
- Identify risk: build framework
- Baseline expected behaviours
- State unexpected behaviours (KRIs)
- Methodical flow of questions
- Common score across Funds
- Thresholds and escalation
Governance: Do Funds do what they say?
The KRI score is directly linked to the number of 'Yes' responses in the logic test. The test is summarised as a score, as shown above (6 ambers + 2 greens + 0 reds = total score of 8).
Appendices
1. Escalation
2. Thresholds
3. Holding Periods
4. Histograms
5. Tracking Maps
Summary
Governance: Do Funds do what they say?
Escalation: (R) Red (Action to remedy), (A) Amber (Alert for issues), (G) Green (Business as usual)
Appendix 1
Governance: Do Funds do what they say?
Logic-test questions (each answered Yes/No):
- Maximum losses and/or drawdowns rising above expected outcomes and previous waterlines?
- VaR greater than expected for strategy, sector?
- Rising negative skewness, volatility or other downside indicators?
- Redemptions in excess of 10% of TNA within 3 months, or coupled to zero/flat gross sales?
- Unusual correlation to benchmark/peers, attribution or risk factors (e.g. Barra)?
- VaR continuing to rise, risk of violations above the permitted number for the model?
- Any unexpected turnover, cash, concentrations or portfolio indicators?
- Any unexpected drop-off in performance or rising volatility?
- Recent material changes in the Fund?
- Unusual sector flows/redemptions (e.g. herding, momentum or retraction)?
- Excess redemptions or sales volatility over sector peers?
- Do redemptions appear to have impacted falling performance?
Score scale: 1 (G), 2 (G), 3 (A), 4 (A), 5 (A), 6 (A), 7 (A), 8 (A), 9 (A), 10 (A), 11 (R), 12 (R). Green = business as usual; Amber = alert for systemic issues; Red = actions for remedy.
MI: Managing Thresholds Appendix 2 Governance: Do Funds do what they say?
Appendix 3 Holding Periods Governance: Do Funds do what they say?
Expected Behaviours: Histograms (IMA) IMA UK Index-Linked Gilt IMA Global Emerging Mkts IMA North America IMA Global Growth Appendix 4 Governance: Do Funds do what they say?
Reporting: Seeing the big picture at a glance.. Appendix 5 Governance: Do Funds do what they say?