2011-02-04 - D. Sallier - Probabilistic forecasting
Presentation Transcript

  • Probabilistic demand forecasting. Prepared & presented by Daniel SALLIER, Traffic Data & Forecasting Director, Aéroports de Paris, [email_address], 01 70 03 45 68
  • Content
    • Foreground
      • The "classical" forecasting approach
      • Drawbacks of the "classical" forecasting approach
      • 2 generic sources of uncertainty in any forecast
    • How to cope with the intrinsic technical uncertainty
      • What we are looking for …
      • Let's go back to the very basics
      • Step #1: model determination
      • Step #2: determination of the law of probability of the model parameters
      • Step #3: determination of the law of probability of the model output: Y
      • Step #4: determination of the law of probability of the future values
  • Content (continued)
      • The data aggregation / break-up issue
      • The data aggregation issue
      • The data break-up issue
    • Part of the prospective uncertainty: the residual issue
      • What are residuals?
      • Taking into account part of the prospective risk
    • Further developments and applications
      • Vertical cuts for most of the short term utilisation
      • Horizontal cuts for most of the mid & long term utilisation
    • Conclusions
      • So many advantages, so few drawbacks
  • Foreground
  • The "classical" forecasting approach
    • Econometric or time-series models most of the time;
    • Assumptions on the future value of the inputs, leading to:
      • A single forecasted value (base case?);
      • Scenario-based forecasts.
    • "Post-processing" of the model outputs by the experts and/or the management;
    [Figure: historical passenger traffic (M pax), 1950-2020, with base, high and low case forecasts]
  • Drawbacks of the "classical" forecasting approach
    • The "cheating/forgery" risk:
      • "political" figures decided by the management, to be "scientifically" justified by the forecasting team;
      • experts eager to be as consensual as possible with the rest of the community: better to be wrong together than right alone!
    • It ends up with self-deception within the company;
    • The never-ending "what if …" questions asked by a management afraid of having to make a decision;
    • The forecasting team implicitly deciding what level of risk the company should incur;
    • A single figure, or even scenario-related figures, does not make any sense from a mathematical and statistical point of view.
  • 2 generic sources of uncertainty in any forecast
    • The intrinsic technical uncertainty:
      • Assumptions on the future value of the inputs (GDP, population, fares, …);
      • The very nature of the forecasting model (linear law, exponential law, log law, …);
      • The uncertainty on the value of the parameters of the forecasting models;
      • The residuals: the difference between actual values and estimates.
    • The prospective uncertainty: any "abnormal" event which may happen in the future.
    The techniques developed by ADP's R&D team address mostly the 1st type of generic uncertainty: The intrinsic technical uncertainty
  • How to cope with the intrinsic technical uncertainty
  • What is the output we are looking for …
    • The theory of probabilities provides the tools to answer most of the issues raised by the measurement of the present and the future uncertainty:
    … how to proceed? (illustration based on dummy data)
  • Let's go back to the very basics
    • The full story always starts with a cloud of dots out of which one should find one or several laws/models to be further used as forecasting model(s):
    Actual data
  • Step #1: model determination
    • 1 or several models can fit the data. The way the models are determined is not important (econometric models, behavioural models, etc.).
    Unless one has a precise reason to select a specific model, there is no reason to keep just one of them and discard all the others: each model is given an equal chance. R&D work is in process to address this issue: the ADN engine, for Alexander's Drift Net.
    [Figure: actual data successively fitted by a 1st, 2nd and 3rd model]
  • Step #2: determination of the law of probability of the model parameters
    • Let's take the 1st model for instance.
    • Its equation is: [equation lost in transcript], where εX is the residual.
    • Bootstrap techniques make it possible to determine the laws of probability of the different parameters of the model, which are strongly correlated to each other.
    [Figure: example probability distributions of random samples of the model parameters]
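The bootstrap step can be sketched in code. The linear model y = a + b·x, the dummy data and the pairs-bootstrap variant below are illustrative assumptions; the actual ADP models and parameter sets are not given in the slides.

```python
# Illustrative sketch of Step #2: pairs bootstrap on a simple linear
# model y = a + b * x (an assumption; the actual models are richer).
import random

random.seed(42)

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Dummy historical data: true a = 2.0, true b = 0.5, plus noise.
xs = list(range(20))
ys = [2.0 + 0.5 * x + random.gauss(0, 0.3) for x in xs]

# Resample (x, y) pairs with replacement and refit: the collected
# (a, b) pairs form the joint empirical law of the model parameters.
samples = []
for _ in range(2000):
    idx = [random.randrange(len(xs)) for _ in range(len(xs))]
    samples.append(fit_linear([xs[i] for i in idx], [ys[i] for i in idx]))

a_vals = sorted(s[0] for s in samples)
b_vals = sorted(s[1] for s in samples)
print("a: median %.2f, 98%% band [%.2f, %.2f]" % (a_vals[1000], a_vals[20], a_vals[1980]))
print("b: median %.2f, 98%% band [%.2f, %.2f]" % (b_vals[1000], b_vals[20], b_vals[1980]))
```

Keeping each (a, b) pair together, rather than two separate lists, is what preserves the strong correlation between the parameters that the slide mentions.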
  • Step #3: determination of the law of probability of the model output: Y
    • At this stage we have all the probabilistic components of the forecasting model. That's where the Monte Carlo technique proves to be useful:
      • Take a future deterministic or sampled value of X;
      • Draw a random sample of the model parameters;
      • Compute the corresponding value of Y;
      • Save the value of Y;
      • Start the process again until a sufficient number of Ys has been collected;
      • Compute the frequency/probability law of Y.
    [Figure: forecasting model #2 over the actual data, with the 50% greater-or-equal line and the 98% probability band]
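The Step #3 loop can be sketched as follows. The linear model, the future value of X and the faked (correlated) parameter samples are all illustrative assumptions, not ADP figures.

```python
# Illustrative sketch of the Step #3 Monte Carlo loop, assuming joint
# parameter samples (a, b) of a linear model y = a + b * x are already
# available (here they are faked with a correlated normal draw).
import random

random.seed(1)

# Fake joint samples of the model parameters, negatively correlated,
# as bootstrap samples of an intercept and a slope typically are.
param_samples = []
for _ in range(5000):
    b = random.gauss(0.5, 0.05)
    a = random.gauss(2.0, 0.2) - 2.0 * (b - 0.5)   # induce correlation
    param_samples.append((a, b))

x_future = 30.0   # deterministic future value of the input X

# The loop itself: draw a parameter sample, compute Y, save it, repeat.
ys = sorted(a + b * x_future for a, b in param_samples)

median = ys[len(ys) // 2]
lo, hi = ys[int(0.01 * len(ys))], ys[int(0.99 * len(ys))]
print("50%% probability for Y to be >= %.2f" % median)
print("98%% probability for Y to be within [%.2f, %.2f]" % (lo, hi))
```

Sorting the collected Ys gives the empirical frequency/probability law directly: any percentile of the future value can then be read off by index.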
  • Step #4: determination of the law of probability of the future values
    • At this stage of the process we have all the probabilistic future values of each forecasting model.
    • That's where the Monte Carlo technique is used once again, to combine all these values and get the final probabilistic forecast.
    • Each model is given an equal probability to occur.
    [Figure: combined forecast over the actual data, with the 50% greater-or-equal line and the 98% probability band]
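The Step #4 combination amounts to sampling an equal-probability mixture of the candidate models. In this sketch each model's output law is faked as a normal distribution; the numbers are illustrative assumptions.

```python
# Illustrative sketch of Step #4: combine the probabilistic outputs of
# several models by sampling them with equal probability.
import random

random.seed(7)

# Each entry stands for one model's probabilistic output at the future
# date (faked here as normal laws; in practice these would be the Y
# samples produced by Step #3 for each model).
models = [
    lambda: random.gauss(17.0, 0.5),   # model 1
    lambda: random.gauss(17.5, 0.8),   # model 2
    lambda: random.gauss(16.5, 0.4),   # model 3
]

# Equal-probability mixture: pick a model at random, then draw from it.
ys = sorted(random.choice(models)() for _ in range(9000))

median = ys[len(ys) // 2]
lo, hi = ys[int(0.01 * len(ys))], ys[int(0.99 * len(ys))]
print("combined 50%% point: %.2f, 98%% band: [%.2f, %.2f]" % (median, lo, hi))
```

Note that the combined 98% band is wider than any single model's band: the disagreement between models is itself part of the intrinsic technical uncertainty.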
  • The data aggregation / break-up issue
  • The data aggregation issue
    • Let's suppose that we are interested in the forecasted demand of the French residents which depends on the French GDP.
    • For a given value of the French GDP, we can calculate a forecasted demand to/from the UK, to/from the USA, to/from Japan, etc. It means that, from a statistical point of view, the different flows of traffic from/to France cannot be regarded as independent variables.
    • A straightforward application of the Monte Carlo technique would mix around all the random samples along the computation process as if they were fully independent, which they are not.
  • The data aggregation issue (continued)
    • This problem can be overcome by "flagging" each value of the explanatory variables (i.e. French GDP, British GDP, etc.) and "sticking" the flag(s) to the intermediate or final random samples which share the same value of the explanatory variable(s).
    • Instead of "mixing around" all the data set, the Monté-Carlo engine just "mixes around" the random samples which are sharing the same flag.
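A minimal sketch of why the flags matter, under assumed numbers: two traffic flows driven by the same French GDP sample are aggregated flag by flag, and the spread of the total is compared with what a naive, independence-assuming mix would give.

```python
# Illustrative sketch of flagged aggregation: samples of two traffic
# flows that share the same GDP draw keep the same flag (here simply
# the sample index) and are summed together, not mixed independently.
import random

random.seed(3)

n = 5000
gdp = [random.gauss(2.0, 0.5) for _ in range(n)]   # shared explanatory variable

# Dummy flow models: both depend on the same French GDP sample.
uk = [10 + 3.0 * g + random.gauss(0, 0.2) for g in gdp]
us = [20 + 5.0 * g + random.gauss(0, 0.3) for g in gdp]

def spread(samples):
    """Standard deviation of a list of samples."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

# Correct aggregation: sum the samples sharing the same flag.
flagged = [u + v for u, v in zip(uk, us)]

# Naive aggregation: shuffle one flow, i.e. treat them as independent.
naive = [u + v for u, v in zip(uk, random.sample(us, n))]

print("std with flags: %.2f, std assuming independence: %.2f"
      % (spread(flagged), spread(naive)))
```

Because both flows move together with GDP, the flagged total is noticeably more dispersed than the naive one; mixing all samples around would understate the aggregate uncertainty.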
  • The data break-up issue
    • Let's suppose that the overall business level of risk has been set to 80% probability for the overall demand to be greater or equal, for instance. How does it cascade down? What is the corresponding level of risk of each traffic flow?
    • One should bear in mind that, unfortunately, 1 + 1 ≠ 2 when dealing with probabilities; 1 + 1 could make 1.9!
    • Flagging the random samples of each traffic flow is one of the solutions to trace back which ones have been used in the final computation.
    [Figure: cumulated probability distribution of the overall demand (samples below the 80% level discarded) and of traffic flow #i (74% of samples elected), with the frequency law of the elected samples]
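One way the flagged samples can be traced back is sketched below, with hypothetical numbers: the samples whose total clears the overall 80% greater-or-equal level are "elected" by their flag, and the cascaded probability of an individual flow is then measured on that elected set.

```python
# Illustrative sketch of the break-up step: cascade an overall 80%
# greater-or-equal risk level down to one traffic flow via flags.
import random

random.seed(5)

n = 10000
# Dummy flagged samples: two traffic flows sharing a common driver.
driver = [random.gauss(0, 1) for _ in range(n)]
flow1 = [5 + d + random.gauss(0, 0.5) for d in driver]
flow2 = [8 + 0.5 * d + random.gauss(0, 1.0) for d in driver]
total = [a + b for a, b in zip(flow1, flow2)]

# 80% probability for the overall demand to be greater or equal means
# the threshold is the 20th percentile of the total.
threshold = sorted(total)[int(0.20 * n)]

# Elect the samples (by flag = index) whose total clears the threshold.
elected = [i for i in range(n) if total[i] >= threshold]

# Cascaded level of flow #1: within the elected set, how often does it
# clear its own standalone 80% greater-or-equal value?
f1_threshold = sorted(flow1)[int(0.20 * n)]
p = sum(1 for i in elected if flow1[i] >= f1_threshold) / len(elected)
print("flow #1 cascaded probability: %.0f%%" % (100 * p))
```

The cascaded probability is not 80%: because the flows are correlated through the common driver, the flow-level risk differs from the overall one, which is exactly the "1 + 1 ≠ 2" point.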
  • Part of the prospective uncertainty: the residual issue
  • Taking into account part of the prospective risk
    • A very simple and straightforward idea:
      • Determination of the law of probability of the residuals;
      • Addition of the residual effects to the "regular" probabilistic forecast, which can be achieved with a new round of Monte Carlo simulations.
    • By doing so we can take into account part of the prospective risks, i.e. the risks linked to "unusual" events which have already happened in the past and may happen again.
    • Of course there is no statistical or probabilistic method to estimate the effects of future events which have never happened yet; that's where scenario-based approaches can be brought back to the front stage.
    • This approach answers the amplitude and likelihood questions about "unusual" events. It does not answer the when and how long questions: it just measures a "latent risk".
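The new Monte Carlo round can be sketched as follows, with dummy residuals and forecast samples (both are illustrative assumptions): a past residual, expressed as a percentage of traffic, is drawn at random and applied to each forecast sample.

```python
# Illustrative sketch of adding the residual effects to the "regular"
# probabilistic forecast via a new round of Monte Carlo simulations.
import random

random.seed(11)

# Dummy empirical residuals observed in the past (% of total traffic),
# including a couple of "unusual" downward events.
residuals = [-0.20, -0.10, -0.05, -0.02, 0.0, 0.01, 0.02, 0.03, 0.04, 0.05]

# Dummy "regular" probabilistic forecast samples for a future year.
forecast = [random.gauss(17.0, 0.6) for _ in range(8000)]

# New Monte Carlo round: apply a randomly drawn past residual effect
# to each forecast sample.
with_resid = sorted(f * (1 + random.choice(residuals)) for f in forecast)
plain = sorted(forecast)

def band(ys):
    """98% probability band of a sorted sample list."""
    return ys[int(0.01 * len(ys))], ys[int(0.99 * len(ys))]

print("98%% band, no residuals:       [%.1f, %.1f]" % band(plain))
print("98%% band, residuals included: [%.1f, %.1f]" % band(with_resid))
```

As on the slide's charts, including the residuals widens the 98% probability range, mostly on the downside here, since the dummy "unusual" events are traffic drops.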
  • Taking into account part of the prospective risk (continued)
    There is ground here for the development of specific financial / management / industrial tools and policies to cover part of this latent risk.
    [Figures: probability distribution of the residuals (% of total pax), and traffic/demand forecasts 1986-2024 showing the 50% greater-or-equal line and the 98% probability range, with and without residuals included]
  • Further developments and applications
  • Vertical cuts for most of the short term utilisation
    [Figures: probability for the turnover and for the operating profit (million €) to be greater or equal, with capacity thresholds]
    • To be used for:
    • (human) Resources dimensioning
    • Budget, cash flow
    • Future financial ratios analysis
    • Short term risk assessment
    • etc.
    [Figure: probability for the demand (million pax) to be greater or equal, 1986-2024, against the actual capacity threshold]
  • Horizontal cuts for most of the mid & long term utilisation
    To be mostly used for optimal dimensioning and planning of mid and long term capacity growth: heavy investments.
    [Figure: traffic/demand 1986-2024 against actual and planned capacity, with annual 50% probability and 98% centred probability curves, and year-0 operating profit]
  • Conclusions
  • So many advantages, so few drawbacks
    • A quite simple idea, but a rather complex and computer-time-consuming approach;
    • Puts an end to the times when forecasters were regarded as fortune-tellers, gurus, devious crooks or scientific alibis for their boss's misbehaviour (or their boss's boss's too);
    • Brings the risk-taking decision back where it should always have been: with the top management. In addition, it offers the exhaustive set of data required by risk assessment tools;
    • Likely to offer a better legal protection to the forecasters in case of litigation with the shareholders or the financial markets;
    • Our own experience is that bankers are fond of this way of making forecasts. Aren't they mostly risk traders?
    • We (ADP's forecasting team) are fond of it too, since it saves us a lot of forecasting post-processing time, with no more pressure put on us to find "convenient figures".