Concepts of predictive control

Introductory course on concepts used in predictive control. For more files and MATLAB supporting information go to:

http://controleducation.group.shef.ac.uk/OER_index.htm

1. Predictive control
Anthony Rossiter
Department of Automatic Control and Systems Engineering, University of Sheffield
www.shef.ac.uk/acse
© University of Sheffield 2009. This work is licensed under a Creative Commons Attribution 2.0 License.

2. MPC
- Predictive control is chosen as a technique that is very widely implemented within industry and hence of the widest potential use.
- Predictive control describes an 'approach' to control design, not a specific algorithm.
- A user would ideally interpret the approach to define an algorithm suitable for their own needs.

3. Organisation
- Introduction to the key concepts of predictive control. MOST IMPORTANT
- Some numerical/algebraic details. OBVIOUS IF THE CONCEPTS ARE UNDERSTOOD
- Some examples.
- The laboratory session.
- The assignment.

4. Review of the main components of MPC
- Prediction
- Receding horizon
- Modelling
- Performance index
- Constraint handling
- Multivariable design
Having reviewed these components, we can talk about algorithm design and tuning. The key to effective implementation is a real understanding of how MPC works!

5. Prediction
- Why is prediction important?
- How far should we predict?
- Consequences of not predicting.
- How do we predict?
- How accurate do predictions need to be?
6. Illustration
[Figure: a car on a road, with the prediction horizon marked ahead of it.]
7. Receding horizon
- What is a receding horizon?
- Why is it essential? (uncertainty/feedback)
- How is it embedded into MPC?
- What advantages does it bring and what repercussions does it have on prediction accuracy?

8. Illustration of initial prediction

9. Illustration

10. Illustration
11. What do we learn
- We did not observe all constraints at the outset – we ended up in a dead-end (or DEAD).
- If we had re-optimised at t=T, we would not have been able to use the previous optimum trajectory.
The previous optimum was not optimum! WHAT WAS IT THEN? Optimisation of a prediction is meaningless unless it can be implemented. MPC is based on optimisation – make sure this is well posed!

12. Modelling
- What is the model used for?
- How does the model's use impact on the modelling steps?
- What type of model is required: FIR, state space, mental, etc.?
- How much effort should the modelling take?
- How accurate and/or precise should the model be?

13. Performance index
- What is the performance index used for?
- How should the performance index be designed?
- Trade-offs between optimal and safe performance.
- What performance indices do humans use in everyday life? How do these change as we get older (e.g. from babies to adults)?
- What horizons do we use and why? (skill levels)

14. Constraint handling
- How are constraints embedded into most control strategies?
- Do you use a posteriori or a priori design?
- How do humans embed constraints into their behaviour and why?
- How is MPC different from most conventional approaches?

15. Multivariable
- Ordinarily, how does industry cope with MIMO loops and what are the consequences?
- How does MPC differ and what are the pros and cons?

16. Summary – well posed MPC
- Modelling efforts should be focussed on efficacy for prediction, including dependence on the d.o.f.
- Predictions must capture all the transient and steady-state behaviour.
- The prediction class should include the desired closed-loop behaviour.
- Performance indices must be realistic and matched to model accuracy.
- Constraints must be built in from the beginning.
- Efficient computation requires linear models and simple parameterisations of the d.o.f.
AND: who will manage this controller? What are their needs?
17. Organisation
- The remainder of this talk looks at how we might design an algorithm meeting these requirements.
- KEY POINT: MPC is a concept – you must design an algorithm to meet YOUR needs!
- By having clear insight, you should be able to identify the cause when MPC is not delivering.
I have designed this talk to help you become a designer, not just a user of existing products. HOWEVER, do ask whatever you most want to know.

18. Overview
- Prediction with transfer function and state space models.
- Formulation of a GPC control law.
- Modifying structures to improve robustness.
- Modifying performance indices to improve stability and tuning.
- Including constraints.

19. Numerical and algebraic details
- Prediction with state space models.
- Formulation of a GPC control law.
- Robustness.
- Performance and stability.
- Including constraints.
20. Prediction with state space models
- You should be aware of the key result: all predictions for linear models take the form y = H u + P x (in stacked vector form), where x is measured data (e.g. the current state), u is the vector of future controls and y is the vector of predicted outputs.
- Hence, one can manipulate the predicted outputs directly by changing the predicted inputs.
- The definitions of H and P are secondary issues.

21. Philosophy of prediction
- Form a one-step-ahead prediction model and predict y(k+1).
- Use this model recursively, so given y(k+i) find y(k+i+1).
- Continue until one has predicted ny steps ahead and group the results as on the previous slide.
- Hence prediction is equivalent to solving a large set of simultaneous equations.
- In general some filtering is needed to reduce the sensitivity to noisy measurements.
- Coding is trivial if one has access to software with matrix algebra; a sketch is given below.
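As an illustration of this recursion, here is a minimal MATLAB sketch. The model matrices A, B, C, the horizon ny and the example data are assumptions made purely for illustration; the same structure applies whether the degrees of freedom are the inputs u or the increments Δu.

  % Build prediction matrices so that the stacked outputs satisfy  y = H*u + P*x0
  % for the hypothetical model x(k+1) = A*x(k) + B*u(k), y(k) = C*x(k).
  A = [1.2 0.1; 0 0.9];  B = [0; 1];  C = [1 0];   % example model (assumed)
  ny = 10;                                         % prediction horizon
  nx = size(A,1);  m = size(B,2);                  % state and input dimensions

  H = zeros(ny, ny*m);   P = zeros(ny, nx);
  Apow = eye(nx);
  for i = 1:ny
      Apow = A*Apow;                               % A^i
      P(i,:) = C*Apow;                             % free response
      for j = 1:i
          H(i, (j-1)*m+1 : j*m) = C*(A^(i-j))*B;   % effect of u(k+j-1) on y(k+i)
      end
  end

  x0 = [1; 0];  u = zeros(ny*m, 1);                % example data: current state, future inputs
  ypred = H*u + P*x0;                              % predictions y(k+1), ..., y(k+ny)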
22. Remark
- Different algorithms may choose the future control sequence in several ways.
- KEY POINT: your predictions must be unbiased in the steady-state. [See notes]
[Figure: change in control plotted against distance from the expected steady-state.]
23. Prediction with transfer function and state space models
- You should be aware of the key result: all predictions for linear models take the form y = H u + P x, where x is measured data (the current state and/or past inputs and outputs).
- Hence, one can manipulate the predicted outputs directly by changing the predicted inputs.
- The definitions of H and P are secondary issues.

24. Philosophy of prediction
- Form a one-step-ahead prediction model and predict y(k+1).
- Use this model recursively, so given y(k+i) find y(k+i+1).
- In fact Diophantine equations replicate this, but personally I think they obscure what is happening and hence I recommend you DO NOT USE THEM!

25. Prediction continued
- Given that prediction is a simple recursion, one can write down all the relevant equations as a group of simultaneous equations.
- Hence prediction is equivalent to solving a large set of simultaneous equations.
- This approach gives insightful and compact algebra. It is also easy to code in MATLAB.
- It extends more easily to more complex cases (e.g. MIMO and the T-filter).

26. Prediction illustrations in the lecture
- Steps for ARMA prediction, with a summary.
- Extension to MIMO.
- Prediction with state space models.

27. GPC control law
- Define a performance index which measures predicted errors and control activity over some horizons.
- Choose the future control moves to minimise the predicted cost, that is, optimise expected performance.

28. GPC control law
- Write the performance index in a more compact form using vectors and matrices.
- Find the optimum with a grad operator, as the index is quadratic (and always positive).
29. Gradient operations
- A quick review of differentiation with respect to vector variables; the standard identities relied upon are summarised below.
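The formulae on this slide were images; the standard vector-differentiation identities that the GPC derivation relies on are most likely the following.

\[
\frac{\partial}{\partial x}\bigl(b^{T}x\bigr) = b,
\qquad
\frac{\partial}{\partial x}\bigl(x^{T}Ax\bigr) = (A+A^{T})x = 2Ax \ \text{ for symmetric } A .
\]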
30. Optimising J
- The grad operation on J can now be written down by inspection; a sketch of the algebra is given below.
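The slide's algebra was an image; with the prediction form y = H Δu + P x, set point r and control weighting λ (the decision variables are written generically as Δu here), a standard GPC form consistent with the surrounding slides is:

\[
J = \|H\Delta u + Px - r\|_{2}^{2} + \lambda\|\Delta u\|_{2}^{2},
\qquad
\nabla_{\Delta u}J = 2H^{T}(H\Delta u + Px - r) + 2\lambda\Delta u = 0
\;\Rightarrow\;
\Delta u^{*} = (H^{T}H + \lambda I)^{-1}H^{T}(r - Px).
\]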
31. Control law
- Take the first component of the optimising input sequence.
- We have ignored the algebraic details required for integral action, disturbance rejection, etc. You can find these in the text books.

32. Control law
- Take the first component of the optimising input sequence; a sketch of the receding-horizon step is given below.
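A minimal MATLAB sketch of the unconstrained receding-horizon update, reusing H, P, x0, ny and m from the prediction sketch above; the weighting lambda and the set-point trajectory r are assumptions for illustration, and the decision variable is written as the future input sequence (the incremental Δu form is analogous):

  % Unconstrained GPC-style update (assumes H, P, x0, ny, m from the earlier sketch).
  lambda = 0.1;                                  % example control weighting
  r = ones(ny,1);                                % example set-point trajectory
  Kgpc = (H'*H + lambda*eye(ny*m)) \ H';         % least-squares gain
  ufut  = Kgpc * (r - P*x0);                     % optimal future control sequence
  u_now = ufut(1:m);                             % receding horizon: apply only the first move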
33. Control law in z-transforms
- Next we write the law in terms of z-transforms so that we can analyse the poles.

34. Closed-loop poles
- As the control law is fixed and linear, we can find the equivalent closed-loop poles.
- In simplified terms, the loop controller and hence the poles follow from the usual feedback relationship.
- Warning: b(z) will contain a delay.

35. Closed-loop poles
- As the control law is fixed and linear, we can find the equivalent closed-loop poles.
- In simplified terms, the loop controller and hence the closed-loop system follow from the usual feedback relationship, sketched below.
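The expressions on these two slides were images; for a plant G(z) = b(z)/a(z) and a linear GPC law written (notation assumed here) as D(z)u = N_r(z)r - N_y(z)y, the implied closed-loop characteristic equation is the usual one:

\[
1 + G(z)K(z) = 0
\quad\Longleftrightarrow\quad
a(z)\,D(z) + b(z)\,N_{y}(z) = 0 ,
\]

so the closed-loop poles are the roots of a(z)D(z) + b(z)N_y(z), consistent with the statement on slide 57 that the poles are determined from (1+GK) = 0.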
36. MIMO case
- This is identical to the SISO case.
- This is a major advantage of MPC: the basic algebra is the same for the SISO and the MIMO cases.

37. Tuning to get good poles
- Take a system and explore how the poles move as the horizons are changed.
- You may find it difficult to see a pattern.
- Low output horizons can give fast responses, but possibly unstable or otherwise poor ones. WHY?
- Low input horizons give cautious behaviour – WHY?
- Large output horizons with a unit input horizon give open-loop behaviour. WHY?
- Control weighting is only effective if the output horizon is close to the settling time. WHY?
- How many closed-loop poles are there?
38. GPC may not always be good
39. Problems with GPC
- The next few slides illustrate the potential weaknesses of using GPC.
- The fundamental concept is called 'prediction mismatch'.
- Prediction mismatch means that the class of predictions over which you optimise:
  - does not include the prediction you would really like;
  - is not close to the actual closed-loop behaviour that arises.
We leave aside issues about whether the tuning parameters are intuitive for technical engineers. PFC proposes tuning by time constants.
40. Optimised predictions with different nu (1, 2, ∞)
41. Optimised predictions one sample later
[Figure: the optimum computed at t compared with the optimum computed at t+1.]
42. ny = 50, nu = 1, Wu = 0.1: the input is not close to the optimum!

43. ny = 50, nu = 2, Wu = 0.1: the input is not close to the closed-loop behaviour!

44. ny = 5, nu = 1, Wu = 1: the input is not close to the optimum!

45. Summary
- GPC algorithms can give ill-posed optimisations.
- A consequence could be very poor behaviour.
- Good behaviour could be more luck than design; receding-horizon arguments are not reliable!
- The fundamental problem is the choice of control trajectories, which may not match the desired behaviour closely enough.
In many large and slow processes, the DMC assumption is close to the desired behaviour and consequently it works well. I would recommend a careful check of potential mismatch before using an algorithm.
46. PFC approaches
[Figure: the ideal path (not the actual one).]
47. The T-filter
- Why is this introduced?
- How is it introduced?
- What impact does it have on the predictions? (No simple analytic formulae can be given, only intuition and a posteriori observations.)
- Summary: filter the data before predicting, then anti-filter back to the original domain.

48. CARIMA model and prediction
- Focus on the key steps/philosophy. The fine details are obvious once this is clear.
- Write the model in terms of filtered data.
- Form predictions in terms of filtered data in exactly the same way as for an ARMA model.
- The performance index needs the unfiltered future values, while the filtered past values are known. So translate future filtered values into future unfiltered values and leave past filtered values as they are.

49. CARIMA model and prediction (b)
- Compare the expressions with and without a T-filter.
- Clearly the matrices multiplying the past (or known) data have changed: the mapping from the past to the future has changed.
- This change in mapping changes how noise in the signals affects the predictions. If 1/T is low-pass, it filters out high frequencies and hence improves prediction quality.

50. Change filtered to unfiltered
- Use Toeplitz and Hankel matrices for convenience of algebra.
- A similar statement can be applied to u, and hence the predictions can be rewritten in terms of the unfiltered variables; a sketch is given below.
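A minimal MATLAB sketch of the Toeplitz/Hankel construction; the filter T(z) and the horizon are assumptions made purely for illustration, and the notation (yf for the filtered signal) is assumed rather than taken from the notes:

  % Express unfiltered future outputs in terms of filtered ones, where
  % yf = y / T(z), i.e. y(k) = T(z) yf(k).
  T  = [1 -0.8];                          % example T(z) = 1 - 0.8 z^-1 (coefficients t0, t1, ...)
  n  = numel(T) - 1;                      % order of T
  ny = 10;                                % prediction horizon (assumes ny > n)

  % Toeplitz part: multiplies the future filtered values yf(k+1), ..., yf(k+ny)
  CT = toeplitz([T(:); zeros(ny-n-1,1)], [T(1), zeros(1,ny-1)]);

  % Hankel part: multiplies the known past filtered values yf(k), yf(k-1), ...
  HT = zeros(ny, n);
  for i = 1:ny
      for p = 1:n
          if i + p - 1 <= n
              HT(i,p) = T(i+p);           % coefficient t_{i+p-1}
          end
      end
  end
  % Then  y_future = CT*yf_future + HT*yf_past,  and similarly for u.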
51. Optimising J with a T-filter
- Modify J to use the appropriate (filtered-data based) prediction.

52. Control law
- Modify the control law accordingly.

53. Control law in z-transforms
- Eliminate the filtered variables using the definition of the T-filter.

54. Closed-loop poles – T-filter
- In simplified terms, the loop controller and hence the poles follow from the same feedback relationship as before.
- NOTE: in fact the poles will include T(z) as a factor (see notes).
55. Robustness
- How can an MPC law be tuned to give better robustness?
- What is the role of the T-filter? How can it be selected?
- How can sensitivity be measured?
- We give some introductory views on these issues.

56. Robustness measures
- Robustness can be defined for many scenarios, such as:
  - sensitivity to modelling errors;
  - sensitivity to disturbances;
  - sensitivity to noise.
- In each case one can form the transference from the uncertainty to the input or output.

57. Parameter uncertainty
- Consider first multiplicative uncertainty in G(z). The poles are determined from (1+GK)=0.

58. Sensitivity to disturbances
- Find the transference from disturbance to input (or output). Use the forward path over (1 + return path); the standard forms are sketched below.
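The expressions on this slide were images; for a unity-feedback loop with plant G and controller K, the "forward path over (1 + return path)" rule gives, for an output disturbance d, the familiar forms (notation assumed here):

\[
\frac{y}{d} = \frac{1}{1+GK},
\qquad
\frac{u}{d} = \frac{-K}{1+GK},
\]

and the same denominator (1+GK) appears in the noise and parameter-uncertainty cases.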
59. Sensitivity to noise
- Find the transference from noise to input (or output). Using the forward path over (1 + return path) one gets a similar relationship to that for the disturbances.
- Technically, disturbances are embedded within G and so more complex relationships can arise. This is outside the remit of this course.

60. Impact of the T-filter on sensitivity
- The T-filter changes the sensitivity significantly, as observed in the plots shown in the lecture.
- The notable change is a reduction in sensitivity at high frequency, with some loss at low/intermediate frequencies.
- Notice T in the denominator, but note also that the numerator has changed.

61. Other ways of changing sensitivity
- The T-filter is known to be effective at reducing input sensitivity to noise, and it can help with model uncertainty. However, its impact is not deterministic a priori.
- T is not easy to design systematically and in fact sometimes has the opposite effect to that expected.
- We need an alternative approach that gives more direct handles on sensitivity, more akin to formal robust control methods.

62. Youla parameterisations
- Youla parameterisations change the control law without having an impact on nominal tracking/poles!
- The change can therefore be used to improve sensitivity without loss of nominal performance.
- As long as Q is stable, it can be chosen however you please and will not change nominal tracking!

63. Selecting Q
- Because the control parameters, and therefore the sensitivity functions, are affine in Q, one can use simple optimisers to identify the best Q to minimise some frequency-domain measure of sensitivity.
- This can be extended to the MIMO case with almost identical algebra (taking some care over commutativity issues).

64. Robustness and the T-filter
- Why is this introduced? It is essential in practice.
- How is it introduced? Filter the data before predicting, then anti-filter back to the original domain.
- What impact does it have on the predictions? They are much less sensitive to noise and other high-frequency uncertainty (better robustness).
65. Constraints
- How are constraints introduced into MPC?
- How are the constraint equations constructed?
- What is the impact on the control law?

66. Typical constraints
- Systems often have limits on:
  - inputs;
  - input rates;
  - outputs;
  - states.
- These constraints can be time varying but more normally are constant and apply at all times.
- These constraints must be satisfied by the optimised predictions.

67. Input constraints and predictions
- Consider upper and lower limits on the input and an equation testing their satisfaction over the input horizon.

68. Input constraints with increments
- Note that the future inputs are the last implemented input plus the accumulated increments, and hence the input limits can be rewritten as linear inequalities in the increments.

69. Combining input rate and input constraints
- Input rate constraints and input constraints together take the form of a single stacked set of linear inequalities in the future increments (sketched below), where E is a lower triangular matrix of ones.
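A minimal MATLAB sketch of how these inequalities might be stacked for a SISO example; the horizon, the previous input and the limits are assumptions made purely for illustration:

  % Stack input and input-rate limits as  Cc*du <= dd  over the input horizon.
  nu     = 3;                    % input horizon (number of future increments)
  u_prev = 0.2;                  % last implemented input u(k-1)
  umax = 1;    umin = -1;        % input limits
  dmax = 0.3;  dmin = -0.3;      % input-rate limits

  E   = tril(ones(nu));          % lower-triangular matrix of ones: u_future = u_prev + E*du
  one = ones(nu,1);

  Cc = [ E; -E; eye(nu); -eye(nu) ];
  dd = [ (umax - u_prev)*one;    %  E*du <=  umax - u(k-1)
         (u_prev - umin)*one;    % -E*du <=  u(k-1) - umin
          dmax*one;              %  du   <=  dmax
         -dmin*one ];            % -du   <= -dmin
  % A future increment sequence du (nu-by-1) satisfies all the limits iff Cc*du <= dd.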
70. Extension to the MIMO case
- Replace the elements of 1 by identity matrices of suitable dimension.
- Otherwise, the structure remains the same, except that individual components are replaced by vectors or matrices as appropriate.

71. Output or state constraints
- These can be a little more messy, but not if you keep a clear head.

72. Summary of constraints
- The constraints all take the form of a set of linear inequalities in the d.o.f., with a right-hand side d.
- Note that d depends upon:
  - measurements (past inputs and outputs);
  - fixed values such as upper and lower limits;
  - the set point and disturbances.
- Hence d must be updated every sample.

73. Combining with the cost function
- We must minimise J subject to the predictions satisfying the constraints; a sketch is given below.
- This is called a quadratic programming (QP) problem.
- The solution is tractable and quick, even for quite large problems.
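A minimal MATLAB sketch of one constrained step posed as a QP, reusing H, P, x0 and ny from the earlier prediction sketch; the weighting, the set point and the simple box limits are assumptions for illustration, and quadprog requires the Optimization Toolbox:

  % One constrained GPC-style step as a quadratic programme.
  lambda = 0.1;
  r    = ones(ny,1);                        % example set-point trajectory
  ndof = size(H,2);                         % number of decision variables
  Cc   = [eye(ndof); -eye(ndof)];           % example constraints: |u| <= 1
  dd   = ones(2*ndof,1);

  Hq = 2*(H'*H + lambda*eye(ndof));         % quadratic term of J
  fq = 2*H'*(P*x0 - r);                     % linear term of J
  uopt  = quadprog(Hq, fq, Cc, dd);         % minimise J subject to Cc*u <= dd
  u_now = uopt(1);                          % receding horizon: apply only the first move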
74. Interpreting a QP
- I will give some illustrations in the lectures of what the optimisation looks like.
- Once again we note that the only difference between MIMO and SISO is the dimension of the optimisation, not its structure or set-up.
- We discuss again the impact of constraints on performance and control.

75. Stability
- We can find the poles of MPC a posteriori.
- How do we decide, a priori:
  - what are good values for the horizons and weights?
  - will the nominal closed-loop system be stable?
  - what happens when constraints are included?

76. Early work
- In the 1980s many authors suggested very specific combinations of horizons/weights to guarantee stability. However, most of this work is pointless as it gives little useful insight, especially with regard to performance.
- We are most interested in what choice of horizons is likely to give good performance, as stability is then automatic.

77. Accepted solutions
- The academic literature has a globally accepted method for ensuring stability and good performance.
- This method is powerful because it does not rely on linear analysis and hence also applies to the constraint-handling case (assuming recursive feasibility).
- In fact, the underlying requirements are common sense and similar to strategies used by humans.
78. The tail
- The first requirement is that the prediction you choose now can also be used at the next sampling instant.
- Hence, if you choose a sensible strategy now and it is working, you must be able to continue with that strategy when you update your decisions.
- We call this riding on the tail: you pick up the part of the strategy not yet implemented and continue with it.
- This puts very specific requirements on the class of predictions: the tail must always be in the class.

79. Large horizons
- You must always look far enough ahead that your predictions contain all the dynamics.
- That is, the predictions should be constant beyond the horizon; otherwise the ignored part of the predictions may include undesirable behaviour, which will be inherited at subsequent samples.
- Large enough usually means the input horizon plus the settling time.

80. Summary
- Infinite horizons and the inclusion of the tail guarantee closed-loop stability (nominal case).
- There is no need to compute the closed-loop poles.
- The proof applies even in the presence of constraints and therefore extends to nonlinear behaviour.
- The proof is given next.

81. With infinite horizons, J is Lyapunov
- Let the optimum predictions at k be the optimising input/output trajectories.
- Now at k+1, inclusion of the tail implies one can select the remainder of those trajectories.
- Next consider that the cost is defined as the sum of predicted tracking errors and control increments over an infinite horizon.

82. With infinite horizons, J is Lyapunov (b)
- Now compare the cost at subsequent sampling instants, assuming one uses the tail.
- It should then be clear that the cost cannot increase (see the inequality sketched below).
- That is, J is always decreasing unless r = y and Δu = 0; this can only happen repeatedly if one is already at the desired steady-state. Moreover, one can always re-optimise at each sample to make J smaller still.
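The expressions on these slides were images; with a cost of the assumed form J_k = Σ_{i≥1} ||r − y_{k+i}||² + λ Σ_{i≥0} ||Δu_{k+i}||², re-using the tail at k+1 simply discards the first (already incurred) stage cost, so the argument is most likely:

\[
J_{k+1}^{*} \;\le\; J_{k+1}^{\text{tail}}
\;=\; J_{k}^{*} - \bigl(\|r - y_{k+1}\|^{2} + \lambda\|\Delta u_{k}\|^{2}\bigr)
\;\le\; J_{k}^{*} ,
\]

so the optimal cost is non-increasing and acts as a Lyapunov function for the nominal closed loop.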
83. The proof applies during constraint handling
- The only difference during constraint handling is that J is minimised subject to constraints.
- If the predictions at time k satisfy the constraints, then so does the tail. Hence the tail can be used at k+1 and the Lyapunov property still holds.
- There is, however, a need for feasibility; that is, one must assume that there exists, at the outset, a prediction class which satisfies the constraints.

84. The proof applies with a finite input horizon
- This is obvious by simply going through the steps of the proof but altering the costing horizon on the inputs to nu.
- All the steps of the proof are identical because restricting nu simply adds lots of zeros for the future control increments.

85. Are infinite horizons impractical?
- You need a Lyapunov equation to sum the errors over an infinite horizon.
- This is straightforward when the prediction dynamics are linear, as sketched below.
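A minimal MATLAB sketch of summing an infinite-horizon quadratic cost via a discrete Lyapunov equation (dlyap requires the Control System Toolbox); the model, the terminal feedback and the weights are assumptions made purely for illustration:

  % If the mode-2 predictions evolve as x(k+1) = Phi*x(k) with stage cost x'*Qbar*x,
  % the infinite-horizon tail cost is x0'*S*x0, where  Phi'*S*Phi - S + Qbar = 0.
  A = [1.2 0.1; 0 0.9];   B = [0; 1];      % example model (assumed)
  K = [4.2 1.0];                           % example terminal feedback (places poles at 0.5, 0.6)
  Phi  = A - B*K;                          % closed-loop (mode 2) dynamics
  Qbar = eye(2) + K'*K;                    % stage weight on x when u = -K*x (Q = I, R = 1)
  S    = dlyap(Phi', Qbar);                % note the transpose: solves Phi'*S*Phi - S + Qbar = 0
  x0   = [1; 0];
  Jtail = x0'*S*x0;                        % cost-to-go over the infinite tail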
86. Different ways of implementing infinite horizons
- People usually use dual-mode predictions:
  - mode 1 covers the immediate transients, where one has total freedom in the control chosen;
  - mode 2 is the asymptotic behaviour and is chosen with predetermined dynamics.
- The choice of mode 2 is the main flexibility in the design.
87. Dual mode paradigm (or closed-loop prediction)
[Figure: an initial state trajectory of at most nc moves drives the predicted state, while satisfying constraints, into a terminal region in which the control law u = -Kx satisfies the constraints; the terminal state lies inside this region.]

88. Open and closed-loop prediction
[Figure: in open-loop prediction the future inputs are passed directly through the model to give the future outputs; in closed-loop prediction the model M is wrapped with a feedback gain K, so the decision variables (with set point r) perturb an underlying feedback law to give the future outputs.]
89. Stability and the dual mode paradigm
- Historically:
  - academics spent a lot of time discussing the tuning and stability of MPC;
  - it was thought that the input/output horizons and the control weighting were 'tuning' parameters;
  - there were few stability results.
- Now:
  - most people advise the dual mode paradigm;
  - the tuning parameter is the terminal control law;
  - the 'control horizon' affects feasibility and computational load.

90. Mode 2 choices
- You need to make a choice between cautious control with large feasible regions and optimal control with better performance but smaller regions of applicability (and perhaps less robustness).
- I will outline typical choices in the lecture.

91. Feasible regions for nc = 1, 2, 3
[Figure: nested feasible regions for nc = 1, 2 and 3.]

92. Terminal region variation with different feedback gains

93. What about DMC?
- DMC and similar algorithms have been successfully and widely applied. Why is this 'new' approach needed?
- Insights:
  - explain the success of DMC and therefore improve user confidence;
  - give a better understanding of the limitations of DMC and of what needs changing.
- Summary:
  - DMC can be considered as a dual mode law with a terminal control law of K = 0 (open loop), as industrial practice has favoured large output horizons (effectively equivalent to infinity);
  - it will be less effective when the open-loop behaviour is poor.

94. Closed-loop prediction
- Current thinking is that, ordinarily, one should predict in the closed loop.
- The predictions are better conditioned and the desired behaviour is embedded.
- The predictions are automatically close to the desired behaviour, giving a more robust optimisation.
- This is especially important for open-loop unstable plant.
- It makes tuning more straightforward.
- Handling feasibility can be easier.
NOTE: this differs from DMC, which assumes the default behaviour is the open-loop dynamics. PFC partially meets this aim (it uses open-loop predictions but matches them to a desired dynamic).
95. Closed-loop MPC algorithm
- Standard MPC objective.
- The decision variables are perturbations c_k to the control trajectory; nominal behaviour corresponds, ideally, to c = 0.
- J is equivalent to a quadratic in the perturbations (a common form is sketched below).
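The expressions on this slide were images; a common form of this closed-loop (dual-mode) parameterisation, with notation assumed here, is

\[
u_{k+i} = -K\,x_{k+i} + c_{k+i},
\qquad c_{k+i} = 0 \ \text{ for } i \ge n_{c},
\]
\[
J = \sum_{i\ge 0}\bigl(\|x_{k+i}\|_{Q}^{2} + \|u_{k+i}\|_{R}^{2}\bigr)
\;=\; x_{k}^{T}P\,x_{k} \;+\; \underline{c}^{T}W\,\underline{c}
\quad\text{(when $K$ is the unconstrained optimal feedback),}
\]

so minimising J over the perturbations reduces to minimising the term in c, and c = 0 recovers the nominal (unconstrained optimal) behaviour.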
96. Constraint handling
- It is well known that, for an LTI model, one can express the predictions in a form that is linear in the current state and the perturbations.
- Therefore constraint satisfaction of the predictions is equivalent to set membership of S.

97. MPC algorithm with constraint handling
- At each sample, perform the quadratic programming optimisation: minimise J over the perturbations subject to the constraint set.
- Implement the optimum control, that is, apply the first optimised perturbation through the underlying feedback law.
More detail of these computations is given in the handout.

98. Why does this paradigm give stability?
- It implicitly uses infinite horizons, and therefore anticipates and allows for the entire future.
- As long as the d.o.f. are parameterised such that one can re-use old decisions, one can continue to ride on good policies to convergence.
- We optimise around 'good' trajectories, so the problem is well conditioned.
No dead ends!

99. Summary of OMPC
- The main components are prediction, optimisation and the explicit inclusion of constraints.
- It is now accepted that the dual mode (or closed-loop) paradigm is a good mechanism for ensuring stability and a well posed optimisation.
- It implicitly uses infinite costing horizons.
- Tuning depends on the terminal law; feasibility depends on the terminal law and the number of d.o.f.
100. Parametric solutions
- A major insight of the last few years is the potential of parametric solvers for MPC problems.
- Instead of an online optimisation:
  - parameterise all possible optimisations;
  - solve these offline;
  - online, determine and implement the appropriate solution.

101. Illustration of regions
Each region has a different control law.

102. Illustration of regions (b)

103. Parametric solvers
- The major benefit is transparency of the control law. There is huge potential where rigorous testing and validation are required.
- The downside is the large number of regions:
  - identifying the active region can be more demanding than solving the original QP;
  - there are data storage requirements for the regions.
- For small-dimension problems they may be invaluable and far simpler to implement. [In essence a look-up table.]
104. Conclusion
- We have focussed on some key concepts from linear MPC.
- We quickly overviewed how understanding the key concepts allows the development of algorithms which:
  - make good engineering sense;
  - have a priori results on performance and stability;
  - allow the user to quickly identify what is wrong when MPC fails.
MPC is very flexible. Don't assume you have to go with an off-the-shelf algorithm. If you are unsure about anything today, please come and talk to me.

105. Today's laboratory
- You will be given the chance to experiment with different:
  - algorithms;
  - tuning parameters;
  - uncertainty.
- The software is simple, to enable insight and easy editing, but it is not intended for general distribution.
106.
- This resource was created by the University of Sheffield and released as an open educational resource through the Open Engineering Resources project of the HE Academy Engineering Subject Centre. The Open Engineering Resources project was funded by HEFCE and was part of the JISC/HE Academy UKOER programme.
- © 2009 University of Sheffield
- This work is licensed under a Creative Commons Attribution 2.0 License.
- The JISC logo is licensed under the terms of the Creative Commons Attribution-Non-Commercial-No Derivative Works 2.0 UK: England & Wales Licence. All reproductions must comply with the terms of that licence.
- The HEA logo is owned by the Higher Education Academy Limited and may be freely distributed and copied for educational purposes only, provided that appropriate acknowledgement is given to the Higher Education Academy as the copyright holder and original publisher.
- The name and logo of the University of Sheffield are trade marks and all rights in them are reserved. The name and logo should not be reproduced without the express authorisation of the University.
- Where Matlab® screenshots are included, they appear courtesy of The MathWorks, Inc.